Mirror of https://github.com/psy0rz/zfs_autobackup.git (synced 2025-04-23 23:00:53 +03:00)
Compare commits
52 Commits
81fa5c5bab
f1dda6cc9f
c5f1e38b18
9e2476ac84
4c5339dedd
b115f4b081
8879519e32
b247b0408b
a2f4dd4227
c52857f7b9
359bfde4c9
5705afc37f
6d4f22b69e
7122dc92af
843b87f319
7feae675a6
7586cacb49
e0c09e9975
de3dff77b8
a62e793247
439ea6a3bc
ff86e3c67f
8b8be80ab7
5cca819916
477e980ba2
b817df8779
46580fb500
aa2c283746
16ab4f8183
50f8aba101
771127d34a
ea8beee7c8
defbc2d0bf
4e4de2de5a
de898fc258
bdc156e48d
f3caca48f2
c0a8cb33ad
feb3972cd7
e30a393d0e
f8cd77e6e4
06420978d5
54e590175d
6e5a6764c5
a7d05a7064
d90ea7edd2
090a2d1343
7cffec1d26
aac62f3fe6
a12b651d17
62f078eaec
fd1e7d5b33
.github/ISSUE_TEMPLATE/issue.md (vendored, 2 changed lines)

@@ -8,4 +8,4 @@ assignees: ''
 
 ---
 
-(Please add the commandline that you use to the issue. Also at least add the output of --verbose. Sometimes it helps if you add the output of --debug-output instead, but its huge, so use an attachment for that.)
+(Please add the commandline that you use to the issue. AT LEAST add the output of --verbose, but usual --debug is needed as well. Sometimes it helps if you add the output of --debug-output instead, but its huge, so use an attachment for that.)

.github/workflows/python-publish.yml (vendored, 20 changed lines)

@@ -5,7 +5,7 @@ name: Upload Python Package
 on:
   release:
-    types: [created]
+    types: [published]
 
 jobs:
   deploy:
@@ -20,20 +20,20 @@ jobs:
         with:
           python-version: '3.x'
 
-      - name: Set up Python 2.x
-        uses: actions/setup-python@v2
-        with:
-          python-version: '2.x'
+      # - name: Set up Python 2.x
+      #   uses: actions/setup-python@v2
+      #   with:
+      #     python-version: '2.x'
 
       - name: Install dependencies 3.x
         run: |
           python -m pip install --upgrade pip
           pip3 install setuptools wheel twine
 
-      - name: Install dependencies 2.x
-        run: |
-          python2 -m pip install --upgrade pip
-          pip2 install setuptools wheel twine
+      # - name: Install dependencies 2.x
+      #   run: |
+      #     python2 -m pip install --upgrade pip
+      #     pip2 install setuptools wheel twine
 
       - name: Build and publish
         env:
@@ -41,6 +41,6 @@ jobs:
           TWINE_PASSWORD: ${{ secrets.TWINE_PASSWORD }}
         run: |
           python3 setup.py sdist bdist_wheel
-          python2 setup.py sdist bdist_wheel
+          # python2 setup.py sdist bdist_wheel
           twine check dist/*
           twine upload dist/*

.github/workflows/regression.yml (vendored, 26 changed lines)

@@ -46,29 +46,3 @@ jobs:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         run: coveralls --service=github || true
 
-  ubuntu20_python2:
-    runs-on: ubuntu-20.04
-
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v3.5.0
-
-      - name: Set up Python 2.x
-        uses: actions/setup-python@v2
-        with:
-          python-version: '2.x'
-
-      - name: Prepare
-        run: sudo apt update && sudo apt install zfsutils-linux lzop pigz zstd gzip xz-utils lz4 mbuffer && sudo -H pip3 install coverage unittest2 mock==3.0.5 coveralls
-
-
-      - name: Regression test
-        run: sudo -E ./tests/run_tests
-
-
-      - name: Coveralls
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        run: coveralls --service=github || true
-
-

@@ -57,9 +57,12 @@ An important feature that's missing from other tools is a reliable `--test` option
 
 Please look at our wiki to [Get started](https://github.com/psy0rz/zfs_autobackup/wiki).
 
+Or read the [Full manual](https://github.com/psy0rz/zfs_autobackup/wiki/Manual)
+
 # Sponsor list
 
-This project was sponsorred by:
+This project was sponsored by:
 
-* JetBrains (Provided me with a license for their whole professional product line, https://www.jetbrains.com/pycharm/ )
+* JetBrains
+* https://rsync.net
 * [DatuX](https://www.datux.nl)

scripts/autoupload (new executable file, 1 line)

@@ -0,0 +1 @@
+find zfs_autobackup | entr rsync -avx . "$1":zfs_autobackup

tests/Dockerfile (new file, 17 lines)

@@ -0,0 +1,17 @@
+FROM alpine:3.18
+
+
+#base packages
+RUN apk update
+RUN apk add py3-pip
+
+#zfs autobackup tests dependencies
+RUN apk add zfs openssh lzop pigz zstd gzip xz lz4 mbuffer udev zfs-udev
+
+
+#python modules
+COPY requirements.txt /
+RUN pip3 install -r requirements.txt
+
+#git repo should be mounted in /app:
+ENTRYPOINT [ "/app/tests/tests_docker" ]

tests/autorun_tests_docker (new executable file, 3 lines)

@@ -0,0 +1,3 @@
+#!/bin/sh
+
+find tests zfs_autobackup -name '*.py' |entr ./tests/run_tests_docker $@

@@ -1,9 +1,11 @@
+import os
 # To run tests as non-root, use this hack:
 # chmod 4755 /usr/sbin/zpool /usr/sbin/zfs
 
 import sys
 
+import zfs_autobackup.util
+
 #dirty hack for this error:
 #AttributeError: module 'collections' has no attribute 'MutableMapping'
@@ -28,15 +30,17 @@ import contextlib
 import sys
 import io
 
+import datetime
+
 
 TEST_POOLS="test_source1 test_source2 test_target1"
-ZFS_USERSPACE= subprocess.check_output("dpkg-query -W zfsutils-linux |cut -f2", shell=True).decode('utf-8').rstrip()
-ZFS_KERNEL= subprocess.check_output("modinfo zfs|grep ^version |sed 's/.* //'", shell=True).decode('utf-8').rstrip()
+# ZFS_USERSPACE= subprocess.check_output("dpkg-query -W zfsutils-linux |cut -f2", shell=True).decode('utf-8').rstrip()
+# ZFS_KERNEL= subprocess.check_output("modinfo zfs|grep ^version |sed 's/.* //'", shell=True).decode('utf-8').rstrip()
 
 print("###########################################")
 print("#### Unit testing against:")
-print("#### Python :"+sys.version.replace("\n", " "))
-print("#### ZFS userspace :"+ZFS_USERSPACE)
-print("#### ZFS kernel :"+ZFS_KERNEL)
+print("#### Python : "+sys.version.replace("\n", " "))
+print("#### ZFS version : "+subprocess.check_output("zfs --version", shell=True).decode('utf-8').rstrip().replace('\n', ' '))
 print("#############################################")
 
 
@@ -47,6 +51,10 @@ if sys.version_info.major==2:
 else:
     OutputIO=io.StringIO
 
+# for when we're using a suid-root python binary during development
+os.setuid(0)
+os.setgid(0)
+
 
 # for python2 compatibility (python 3 has this already)
 @contextlib.contextmanager
@@ -73,7 +81,7 @@ def redirect_stderr(target):
 def shelltest(cmd):
     """execute and print result as nice copypastable string for unit tests (adds extra newlines on top/bottom)"""
 
-    ret=(subprocess.check_output("SUDO_ASKPASS=./password.sh sudo -A "+cmd , shell=True).decode('utf-8'))
+    ret=(subprocess.check_output(cmd , shell=True).decode('utf-8'))
 
     print("######### result of: {}".format(cmd))
     print(ret)
@@ -85,7 +93,7 @@ def prepare_zpools():
     print("Preparing zfs filesystems...")
 
     #need ram blockdevice
-    subprocess.check_call("modprobe brd rd_size=512000", shell=True)
+    # subprocess.check_call("modprobe brd rd_size=512000", shell=True)
 
     #remove old stuff
     subprocess.call("zpool destroy test_source1 2>/dev/null", shell=True)
@@ -105,3 +113,18 @@ def prepare_zpools():
     subprocess.check_call("zfs set autobackup:test=child test_source2/fs2", shell=True)
 
     print("Prepare done")
+
+
+@contextlib.contextmanager
+def mocktime(time_str, format="%Y%m%d%H%M%S"):
+
+    def fake_datetime_now():
+        return datetime.datetime.strptime(time_str, format)
+
+    with patch.object(zfs_autobackup.util,'datetime_now_mock', fake_datetime_now()):
+        yield

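The test changes below all swap `patch('time.strftime', ...)` for this new mocktime() helper. A minimal usage sketch (illustrative only, assuming zfs_autobackup.util falls back to datetime_now_mock when it is set):

    # freeze the clock the tool sees, so snapshot names like
    # test-20101111000000 come out deterministic
    with mocktime("20101111000000"):
        ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run()
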
@@ -18,6 +18,17 @@ if ! [ -e /root/.ssh/id_rsa ]; then
   ssh -oStrictHostKeyChecking=no localhost true || exit 1
 fi
 
+cat >> ~/.ssh/config <<EOF
+Host *
+    addkeystoagent yes
+    controlpath ~/.ssh/control-master-%r@%h:%p
+    controlmaster auto
+    controlpersist 3600
+EOF
+
+
+modprobe brd rd_size=512000
+
 umount /tmp/ZfsCheck*
 
 coverage run --branch --source zfs_autobackup -m unittest discover -vvvvf $SCRIPTDIR $@ 2>&1

tests/run_tests_docker (new executable file, 16 lines)

@@ -0,0 +1,16 @@
+#!/bin/sh
+
+set -e
+
+#remove stuff from previous local tests
+zpool destroy test_source1 2>/dev/null || true
+zpool destroy test_source2 2>/dev/null || true
+zpool destroy test_target1 2>/dev/null || true
+
+#is needed
+modprobe brd rd_size=512000 || true
+
+# builds and starts a docker container to run the test suite
+docker build -t zfs-autobackup-test -f tests/Dockerfile .
+docker run --name zfs-autobackup-test --privileged --rm -it -v .:/app zfs-autobackup-test $@
+

@@ -9,11 +9,11 @@ class TestCmdPipe(unittest2.TestCase):
         p=CmdPipe(readonly=False, inp=None)
         err=[]
         out=[]
-        p.add(CmdItem(["ls", "-d", "/", "/", "/nonexistent"], stderr_handler=lambda line: err.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2), stdout_handler=lambda line: out.append(line)))
+        p.add(CmdItem(["sh", "-c", "echo out1;echo err1 >&2; echo out2; echo err2 >&2"], stderr_handler=lambda line: err.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,0), stdout_handler=lambda line: out.append(line)))
         executed=p.execute()
 
-        self.assertEqual(err, ["ls: cannot access '/nonexistent': No such file or directory"])
-        self.assertEqual(out, ["/","/"])
+        self.assertEqual(out, ["out1", "out2"])
+        self.assertEqual(err, ["err1","err2"])
         self.assertIsNone(executed)
 
     def test_input(self):
@@ -56,16 +56,16 @@ class TestCmdPipe(unittest2.TestCase):
         err2=[]
         err3=[]
         out=[]
-        p.add(CmdItem(["ls", "/nonexistent1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
-        p.add(CmdItem(["ls", "/nonexistent2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
-        p.add(CmdItem(["ls", "/nonexistent3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2), stdout_handler=lambda line: out.append(line)))
+        p.add(CmdItem(["sh", "-c", "echo err1 >&2"], stderr_handler=lambda line: err1.append(line), ))
+        p.add(CmdItem(["sh", "-c", "echo err2 >&2"], stderr_handler=lambda line: err2.append(line), ))
+        p.add(CmdItem(["sh", "-c", "echo err3 >&2"], stderr_handler=lambda line: err3.append(line), stdout_handler=lambda line: out.append(line)))
         executed=p.execute()
 
-        self.assertEqual(err1, ["ls: cannot access '/nonexistent1': No such file or directory"])
-        self.assertEqual(err2, ["ls: cannot access '/nonexistent2': No such file or directory"])
-        self.assertEqual(err3, ["ls: cannot access '/nonexistent3': No such file or directory"])
+        self.assertEqual(err1, ["err1"])
+        self.assertEqual(err2, ["err2"])
+        self.assertEqual(err3, ["err3"])
         self.assertEqual(out, [])
-        self.assertIsNone(executed)
+        self.assertTrue(executed)
 
     def test_exitcode(self):
         """test piped exitcodes """
@@ -74,9 +74,9 @@ class TestCmdPipe(unittest2.TestCase):
         err2=[]
         err3=[]
         out=[]
-        p.add(CmdItem(["bash", "-c", "exit 1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,1)))
-        p.add(CmdItem(["bash", "-c", "exit 2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
-        p.add(CmdItem(["bash", "-c", "exit 3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,3), stdout_handler=lambda line: out.append(line)))
+        p.add(CmdItem(["sh", "-c", "exit 1"], stderr_handler=lambda line: err1.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,1)))
+        p.add(CmdItem(["sh", "-c", "exit 2"], stderr_handler=lambda line: err2.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,2)))
+        p.add(CmdItem(["sh", "-c", "exit 3"], stderr_handler=lambda line: err3.append(line), exit_handler=lambda exit_code: self.assertEqual(exit_code,3), stdout_handler=lambda line: out.append(line)))
         executed=p.execute()
 
         self.assertEqual(err1, [])

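Pieced together from the assertions above, the CmdPipe/CmdItem pattern these tests exercise looks roughly like this (a sketch, not the library's documented API):

    out = []
    err = []
    p = CmdPipe(readonly=False, inp=None)
    # each CmdItem gets per-stream callbacks plus an exit-code callback
    p.add(CmdItem(["sh", "-c", "echo hello"],
                  stdout_handler=lambda line: out.append(line),
                  stderr_handler=lambda line: err.append(line),
                  exit_handler=lambda exit_code: None))
    p.execute()
    # afterwards: out == ["hello"], err == []
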
@@ -13,10 +13,10 @@ class TestZfsNode(unittest2.TestCase):
     def test_destroymissing(self):
 
         #initial backup
-        with patch('time.strftime', return_value="test-19101111000000"): #1000 years in past
+        with mocktime("19101111000000"): #1000 years in past
             self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds".split(" ")).run())
 
-        with patch('time.strftime', return_value="test-20101111000000"): #far in past
+        with mocktime("20101111000000"): #far in past
             self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())
 
 

|
@ -29,6 +29,12 @@ class TestZfsEncryption(unittest2.TestCase):
|
|||||||
except:
|
except:
|
||||||
self.skipTest("Encryption not supported on this ZFS version.")
|
self.skipTest("Encryption not supported on this ZFS version.")
|
||||||
|
|
||||||
|
def load_key(self, key, path):
|
||||||
|
|
||||||
|
shelltest("rm /tmp/zfstest.key 2>/dev/null;true")
|
||||||
|
shelltest("echo {} > /tmp/zfstest.key".format(key))
|
||||||
|
shelltest("zfs load-key {}".format(path))
|
||||||
|
|
||||||
def prepare_encrypted_dataset(self, key, path, unload_key=False):
|
def prepare_encrypted_dataset(self, key, path, unload_key=False):
|
||||||
|
|
||||||
# create encrypted source dataset
|
# create encrypted source dataset
|
||||||
@ -49,11 +55,11 @@ class TestZfsEncryption(unittest2.TestCase):
|
|||||||
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsourcekeyless", unload_key=True) # raw mode shouldn't need a key
|
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsourcekeyless", unload_key=True) # raw mode shouldn't need a key
|
||||||
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
|
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty --exclude-received".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty --exclude-received".split(" ")).run())
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --no-snapshot --exclude-received".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --no-snapshot --exclude-received".split(" ")).run())
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty --exclude-received".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty --exclude-received".split(" ")).run())
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --no-snapshot --exclude-received".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --no-snapshot --exclude-received".split(" ")).run())
|
||||||
|
|
||||||
@ -86,11 +92,11 @@ test_target1/test_source2/fs2/sub encryption
|
|||||||
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
|
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
|
||||||
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
|
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --decrypt --allow-empty --exclude-received".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --decrypt --allow-empty --exclude-received".split(" ")).run())
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --decrypt --no-snapshot --exclude-received".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --decrypt --no-snapshot --exclude-received".split(" ")).run())
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --decrypt --allow-empty --exclude-received".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --decrypt --allow-empty --exclude-received".split(" ")).run())
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --decrypt --no-snapshot --exclude-received".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --decrypt --no-snapshot --exclude-received".split(" ")).run())
|
||||||
|
|
||||||
@ -121,13 +127,13 @@ test_target1/test_source2/fs2/sub encryptionroot -
|
|||||||
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
|
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
|
||||||
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
|
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty --exclude-received --clear-mountpoint".split(" ")).run())
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot --exclude-received".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot --exclude-received --clear-mountpoint".split(" ")).run())
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --encrypt --debug --allow-empty --exclude-received --clear-mountpoint".split(" ")).run())
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot --exclude-received".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --debug --no-snapshot --exclude-received --clear-mountpoint".split(" ")).run())
|
||||||
|
|
||||||
r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
|
r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
|
||||||
self.assertEqual(r, """
|
self.assertEqual(r, """
|
||||||
@ -156,14 +162,14 @@ test_target1/test_source2/fs2/sub encryptionroot -
|
|||||||
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
|
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
|
||||||
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
|
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup(
|
self.assertFalse(ZfsAutobackup(
|
||||||
"test test_target1 --verbose --no-progress --decrypt --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
|
"test test_target1 --verbose --no-progress --decrypt --encrypt --debug --allow-empty --exclude-received --clear-mountpoint".split(" ")).run())
|
||||||
self.assertFalse(ZfsAutobackup(
|
self.assertFalse(ZfsAutobackup(
|
||||||
"test test_target1/encryptedtarget --verbose --no-progress --decrypt --encrypt --debug --no-snapshot --exclude-received".split(
|
"test test_target1/encryptedtarget --verbose --no-progress --decrypt --encrypt --debug --no-snapshot --exclude-received --clear-mountpoint".split(
|
||||||
" ")).run())
|
" ")).run())
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
self.assertFalse(ZfsAutobackup(
|
self.assertFalse(ZfsAutobackup(
|
||||||
"test test_target1 --verbose --no-progress --decrypt --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
|
"test test_target1 --verbose --no-progress --decrypt --encrypt --debug --allow-empty --exclude-received".split(" ")).run())
|
||||||
self.assertFalse(ZfsAutobackup(
|
self.assertFalse(ZfsAutobackup(
|
||||||
@ -191,3 +197,117 @@ test_target1/test_source2/fs2 encryptionroot -
|
|||||||
test_target1/test_source2/fs2/sub encryptionroot - -
|
test_target1/test_source2/fs2/sub encryptionroot - -
|
||||||
""")
|
""")
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
def test_raw_invalid_snapshot(self):
|
||||||
|
"""in raw mode, its not allowed to have any newer snaphots on target, #219"""
|
||||||
|
|
||||||
|
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
|
||||||
|
|
||||||
|
with mocktime("20101111000000"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress".split(" ")).run())
|
||||||
|
|
||||||
|
#this is invalid in raw mode
|
||||||
|
shelltest("zfs snapshot test_target1/test_source1/fs1/encryptedsource@incompatible")
|
||||||
|
|
||||||
|
with mocktime("20101111000001"):
|
||||||
|
#should fail because of incompatble snapshot
|
||||||
|
self.assertEqual(ZfsAutobackup("test test_target1 --verbose --no-progress --allow-empty".split(" ")).run(),1)
|
||||||
|
#should destroy incompatible and continue
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --no-snapshot --destroy-incompatible".split(" ")).run())
|
||||||
|
|
||||||
|
|
||||||
|
r = shelltest("zfs get -r -t filesystem encryptionroot test_target1")
|
||||||
|
self.assertMultiLineEqual(r,"""
|
||||||
|
NAME PROPERTY VALUE SOURCE
|
||||||
|
test_target1 encryptionroot - -
|
||||||
|
test_target1/test_source1 encryptionroot - -
|
||||||
|
test_target1/test_source1/fs1 encryptionroot - -
|
||||||
|
test_target1/test_source1/fs1/encryptedsource encryptionroot test_target1/test_source1/fs1/encryptedsource -
|
||||||
|
test_target1/test_source1/fs1/sub encryptionroot - -
|
||||||
|
test_target1/test_source2 encryptionroot - -
|
||||||
|
test_target1/test_source2/fs2 encryptionroot - -
|
||||||
|
test_target1/test_source2/fs2/sub encryptionroot - -
|
||||||
|
""")
|
||||||
|
|
||||||
|
|
||||||
|
def test_resume_encrypt_with_no_key(self):
|
||||||
|
"""test what happens if target encryption key not loaded (this led to a kernel crash of freebsd with 2.1.x i think) while trying to resume"""
|
||||||
|
|
||||||
|
self.prepare_encrypted_dataset("11111111", "test_source1/fs1/encryptedsource")
|
||||||
|
self.prepare_encrypted_dataset("22222222", "test_target1/encryptedtarget")
|
||||||
|
|
||||||
|
|
||||||
|
with mocktime("20101111000000"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1/encryptedtarget --verbose --no-progress --encrypt --allow-empty --exclude-received --clear-mountpoint".split(" ")).run())
|
||||||
|
|
||||||
|
r = shelltest("zfs set compress=off test_source1 test_target1")
|
||||||
|
|
||||||
|
# big change on source
|
||||||
|
r = shelltest("dd if=/dev/zero of=/test_source1/fs1/data bs=250M count=1")
|
||||||
|
|
||||||
|
# waste space on target
|
||||||
|
r = shelltest("dd if=/dev/zero of=/test_target1/waste bs=250M count=1")
|
||||||
|
|
||||||
|
# should fail and leave resume token
|
||||||
|
with mocktime("20101111000001"):
|
||||||
|
self.assertTrue(ZfsAutobackup(
|
||||||
|
"test test_target1/encryptedtarget --verbose --no-progress --encrypt --exclude-received --allow-empty --clear-mountpoint".split(
|
||||||
|
" ")).run())
|
||||||
|
#
|
||||||
|
# free up space
|
||||||
|
r = shelltest("rm /test_target1/waste")
|
||||||
|
|
||||||
|
# sync
|
||||||
|
r = shelltest("zfs umount test_target1")
|
||||||
|
r = shelltest("zfs mount test_target1")
|
||||||
|
|
||||||
|
#
|
||||||
|
# #unload key
|
||||||
|
shelltest("zfs unload-key test_target1/encryptedtarget")
|
||||||
|
|
||||||
|
# resume should fail
|
||||||
|
with mocktime("20101111000001"):
|
||||||
|
self.assertEqual(ZfsAutobackup(
|
||||||
|
"test test_target1/encryptedtarget --verbose --no-progress --encrypt --exclude-received --allow-empty --no-snapshot --clear-mountpoint".split(
|
||||||
|
" ")).run(),3)
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
#NOTE: On some versions this leaves 2 weird sub-datasets that should'nt be there (its probably a zfs bug?)
|
||||||
|
#so we ignore this, and just make sure the backup resumes correctly after reloading the key.
|
||||||
|
# r = shelltest("zfs get -r -t all encryptionroot test_target1")
|
||||||
|
# self.assertEqual(r, """
|
||||||
|
# NAME PROPERTY VALUE SOURCE
|
||||||
|
# test_target1 encryptionroot - -
|
||||||
|
# test_target1/encryptedtarget encryptionroot test_target1/encryptedtarget -
|
||||||
|
# test_target1/encryptedtarget/test_source1 encryptionroot test_target1/encryptedtarget -
|
||||||
|
# test_target1/encryptedtarget/test_source1/fs1 encryptionroot test_target1/encryptedtarget -
|
||||||
|
# test_target1/encryptedtarget/test_source1/fs1@test-20101111000000 encryptionroot test_target1/encryptedtarget -
|
||||||
|
# test_target1/encryptedtarget/test_source1/fs1/encryptedsource encryptionroot test_target1/encryptedtarget/test_source1/fs1/encryptedsource -
|
||||||
|
# test_target1/encryptedtarget/test_source1/fs1/encryptedsource@test-20101111000000 encryptionroot test_target1/encryptedtarget/test_source1/fs1/encryptedsource -
|
||||||
|
# test_target1/encryptedtarget/test_source1/fs1/encryptedsource@test-20101111000001 encryptionroot test_target1/encryptedtarget/test_source1/fs1/encryptedsource -
|
||||||
|
# test_target1/encryptedtarget/test_source1/fs1/sub encryptionroot test_target1/encryptedtarget -
|
||||||
|
# test_target1/encryptedtarget/test_source1/fs1/sub@test-20101111000000 encryptionroot test_target1/encryptedtarget -
|
||||||
|
# test_target1/encryptedtarget/test_source1/fs1/sub/sub encryptionroot - -
|
||||||
|
# test_target1/encryptedtarget/test_source1/fs1/sub/sub@test-20101111000001 encryptionroot - -
|
||||||
|
# test_target1/encryptedtarget/test_source2 encryptionroot test_target1/encryptedtarget -
|
||||||
|
# test_target1/encryptedtarget/test_source2/fs2 encryptionroot test_target1/encryptedtarget -
|
||||||
|
# test_target1/encryptedtarget/test_source2/fs2/sub encryptionroot test_target1/encryptedtarget -
|
||||||
|
# test_target1/encryptedtarget/test_source2/fs2/sub@test-20101111000000 encryptionroot test_target1/encryptedtarget -
|
||||||
|
# test_target1/encryptedtarget/test_source2/fs2/sub/sub encryptionroot - -
|
||||||
|
# test_target1/encryptedtarget/test_source2/fs2/sub/sub@test-20101111000001 encryptionroot - -
|
||||||
|
# """)
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
#reload key and resume correctly.
|
||||||
|
self.load_key("22222222", "test_target1/encryptedtarget")
|
||||||
|
|
||||||
|
# resume should complete
|
||||||
|
with mocktime("20101111000001"):
|
||||||
|
self.assertEqual(ZfsAutobackup(
|
||||||
|
"test test_target1/encryptedtarget --verbose --no-progress --encrypt --exclude-received --allow-empty --no-snapshot --clear-mountpoint".split(
|
||||||
|
" ")).run(),0)
|
||||||
|
|
||||||
|
@@ -33,9 +33,9 @@ class TestExecuteNode(unittest2.TestCase):
 
         #return std err as well, trigger stderr by listing something non existing
         with self.subTest("stderr return"):
-            (stdout, stderr)=node.run(["ls", "nonexistingfile"], return_stderr=True, valid_exitcodes=[2])
+            (stdout, stderr)=node.run(["sh", "-c", "echo bla >&2"], return_stderr=True, valid_exitcodes=[0])
             self.assertEqual(stdout,[])
-            self.assertRegex(stderr[0],"nonexistingfile")
+            self.assertRegex(stderr[0],"bla")
 
         #slow command, make sure things dont exit too early
         with self.subTest("early exit test"):
@@ -110,19 +110,17 @@ class TestExecuteNode(unittest2.TestCase):
 
         with self.subTest("check stderr on pipe output side"):
             output=nodea.run(["true"], pipe=True, valid_exitcodes=[0])
-            (stdout, stderr)=nodeb.run(["ls", "nonexistingfile"], inp=output, return_stderr=True, valid_exitcodes=[2])
+            (stdout, stderr)=nodeb.run(["sh", "-c", "echo bla >&2"], inp=output, return_stderr=True, valid_exitcodes=[0])
             self.assertEqual(stdout,[])
-            self.assertRegex(stderr[0], "nonexistingfile" )
+            self.assertRegex(stderr[0], "bla" )
 
         with self.subTest("check stderr on pipe input side (should be only printed)"):
-            output=nodea.run(["ls", "nonexistingfile"], pipe=True, valid_exitcodes=[2])
+            output=nodea.run(["sh", "-c", "echo bla >&2"], pipe=True, valid_exitcodes=[0])
             (stdout, stderr)=nodeb.run(["true"], inp=output, return_stderr=True, valid_exitcodes=[0])
             self.assertEqual(stdout,[])
             self.assertEqual(stderr,[])
 
-
-
 
     def test_pipe_local_local(self):
         nodea=ExecuteNode(debug_output=True)
         nodeb=ExecuteNode(debug_output=True)
@@ -209,5 +207,3 @@ class TestExecuteNode(unittest2.TestCase):
 
 
 
-if __name__ == '__main__':
-    unittest.main()

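The pipe subtests above chain the output of one ExecuteNode into another; schematically (a sketch based only on the calls shown here):

    # run a producer without consuming its output yet
    output = nodea.run(["sh", "-c", "echo data"], pipe=True, valid_exitcodes=[0])
    # feed it as stdin to a consumer on another node and collect both streams
    (stdout, stderr) = nodeb.run(["cat"], inp=output, return_stderr=True, valid_exitcodes=[0])
    # stdout == ["data"]; stderr from the input side is only printed, not returned
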
@@ -32,7 +32,7 @@ class TestExternalFailures(unittest2.TestCase):
     def test_initial_resume(self):
 
         # inital backup, leaves resume token
-        with patch('time.strftime', return_value="test-20101111000000"):
+        with mocktime("20101111000000"):
             self.generate_resume()
 
         # --test should resume and succeed
@@ -42,12 +42,7 @@ class TestExternalFailures(unittest2.TestCase):
 
             print(buf.getvalue())
 
-        # did we really resume?
-        if "0.6.5" in ZFS_USERSPACE:
-            # abort this late, for beter coverage
-            self.skipTest("Resume not supported in this ZFS userspace version")
-        else:
-            self.assertIn(": resuming", buf.getvalue())
+        self.assertIn(": resuming", buf.getvalue())
 
         # should resume and succeed
         with OutputIO() as buf:
@@ -56,12 +51,7 @@ class TestExternalFailures(unittest2.TestCase):
 
             print(buf.getvalue())
 
-        # did we really resume?
-        if "0.6.5" in ZFS_USERSPACE:
-            # abort this late, for beter coverage
-            self.skipTest("Resume not supported in this ZFS userspace version")
-        else:
-            self.assertIn(": resuming", buf.getvalue())
+        self.assertIn(": resuming", buf.getvalue())
 
         r = shelltest("zfs list -H -o name -r -t all test_target1")
         self.assertMultiLineEqual(r, """
@@ -81,11 +71,11 @@ test_target1/test_source2/fs2/sub@test-20101111000000
     def test_incremental_resume(self):
 
         # initial backup
-        with patch('time.strftime', return_value="test-20101111000000"):
+        with mocktime("20101111000000"):
             self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
 
         # incremental backup leaves resume token
-        with patch('time.strftime', return_value="test-20101111000001"):
+        with mocktime("20101111000001"):
             self.generate_resume()
 
         # --test should resume and succeed
@@ -95,12 +85,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
 
             print(buf.getvalue())
 
-        # did we really resume?
-        if "0.6.5" in ZFS_USERSPACE:
-            # abort this late, for beter coverage
-            self.skipTest("Resume not supported in this ZFS userspace version")
-        else:
-            self.assertIn(": resuming", buf.getvalue())
+        self.assertIn(": resuming", buf.getvalue())
 
         # should resume and succeed
         with OutputIO() as buf:
@@ -110,11 +95,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
             print(buf.getvalue())
 
         # did we really resume?
-        if "0.6.5" in ZFS_USERSPACE:
-            # abort this late, for beter coverage
-            self.skipTest("Resume not supported in this ZFS userspace version")
-        else:
-            self.assertIn(": resuming", buf.getvalue())
+        self.assertIn(": resuming", buf.getvalue())
 
         r = shelltest("zfs list -H -o name -r -t all test_target1")
         self.assertMultiLineEqual(r, """
@@ -134,11 +115,9 @@ test_target1/test_source2/fs2/sub@test-20101111000000
     # generate an invalid resume token, and verify if its aborted automaticly
     def test_initial_resumeabort(self):
 
-        if "0.6.5" in ZFS_USERSPACE:
-            self.skipTest("Resume not supported in this ZFS userspace version")
-
         # inital backup, leaves resume token
-        with patch('time.strftime', return_value="test-20101111000000"):
+        with mocktime("20101111000000"):
             self.generate_resume()
 
         # remove corresponding source snapshot, so it becomes invalid
@@ -148,11 +127,11 @@ test_target1/test_source2/fs2/sub@test-20101111000000
         shelltest("zfs destroy test_target1/test_source1/fs1/sub; true")
 
         # --test try again, should abort old resume
-        with patch('time.strftime', return_value="test-20101111000001"):
+        with mocktime("20101111000001"):
             self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
 
         # try again, should abort old resume
-        with patch('time.strftime', return_value="test-20101111000001"):
+        with mocktime("20101111000001"):
             self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
         r = shelltest("zfs list -H -o name -r -t all test_target1")
@@ -172,26 +151,23 @@ test_target1/test_source2/fs2/sub@test-20101111000000
     # generate an invalid resume token, and verify if its aborted automaticly
     def test_incremental_resumeabort(self):
 
-        if "0.6.5" in ZFS_USERSPACE:
-            self.skipTest("Resume not supported in this ZFS userspace version")
-
         # initial backup
-        with patch('time.strftime', return_value="test-20101111000000"):
+        with mocktime("20101111000000"):
             self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
 
         # icremental backup, leaves resume token
-        with patch('time.strftime', return_value="test-20101111000001"):
+        with mocktime("20101111000001"):
             self.generate_resume()
 
         # remove corresponding source snapshot, so it becomes invalid
         shelltest("zfs destroy test_source1/fs1@test-20101111000001")
 
         # --test try again, should abort old resume
-        with patch('time.strftime', return_value="test-20101111000002"):
+        with mocktime("20101111000002"):
             self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
 
         # try again, should abort old resume
-        with patch('time.strftime', return_value="test-20101111000002"):
+        with mocktime("20101111000002"):
             self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
         r = shelltest("zfs list -H -o name -r -t all test_target1")
@@ -212,22 +188,19 @@ test_target1/test_source2/fs2/sub@test-20101111000000
     # create a resume situation, where the other side doesnt want the snapshot anymore ( should abort resume )
     def test_abort_unwanted_resume(self):
 
-        if "0.6.5" in ZFS_USERSPACE:
-            self.skipTest("Resume not supported in this ZFS userspace version")
-
-        with patch('time.strftime', return_value="test-20101111000000"):
+        with mocktime("20101111000000"):
             self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
         # generate resume
-        with patch('time.strftime', return_value="test-20101111000001"):
+        with mocktime("20101111000001"):
             self.generate_resume()
 
         with OutputIO() as buf:
             with redirect_stdout(buf):
                 # incremental, doesnt want previous anymore
-                with patch('time.strftime', return_value="test-20101111000002"):
+                with mocktime("20101111000002"):
                     self.assertFalse(ZfsAutobackup(
-                        "test test_target1 --no-progress --verbose --keep-target=0 --allow-empty".split(" ")).run())
+                        "test test_target1 --no-progress --verbose --keep-target=0 --allow-empty --debug".split(" ")).run())
 
             print(buf.getvalue())
 
@@ -250,14 +223,11 @@ test_target1/test_source2/fs2/sub@test-20101111000002
     # test with empty snapshot list (this was a bug)
     def test_abort_resume_emptysnapshotlist(self):
 
-        if "0.6.5" in ZFS_USERSPACE:
-            self.skipTest("Resume not supported in this ZFS userspace version")
-
-        with patch('time.strftime', return_value="test-20101111000000"):
+        with mocktime("20101111000000"):
             self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
 
         # generate resume
-        with patch('time.strftime', return_value="test-20101111000001"):
+        with mocktime("20101111000001"):
             self.generate_resume()
 
         shelltest("zfs destroy test_source1/fs1@test-20101111000001")
@@ -265,7 +235,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
         with OutputIO() as buf:
             with redirect_stdout(buf):
                 # incremental, doesnt want previous anymore
-                with patch('time.strftime', return_value="test-20101111000002"):
+                with mocktime("20101111000002"):
                     self.assertFalse(ZfsAutobackup(
                         "test test_target1 --no-progress --verbose --no-snapshot".split(
                             " ")).run())
@@ -277,14 +247,14 @@ test_target1/test_source2/fs2/sub@test-20101111000002
 
     def test_missing_common(self):
 
-        with patch('time.strftime', return_value="test-20101111000000"):
+        with mocktime("20101111000000"):
             self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
 
         # remove common snapshot and leave nothing
         shelltest("zfs release zfs_autobackup:test test_source1/fs1@test-20101111000000")
         shelltest("zfs destroy test_source1/fs1@test-20101111000000")
 
-        with patch('time.strftime', return_value="test-20101111000001"):
+        with mocktime("20101111000001"):
             self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
 
     #UPDATE: offcourse the one thing that wasn't tested had a bug :( (in ExecuteNode.run()).
@@ -295,7 +265,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
     # #recreate target pool without any features
     # # shelltest("zfs set compress=on test_source1; zpool destroy test_target1; zpool create test_target1 -o feature@project_quota=disabled /dev/ram2")
     #
-    # with patch('time.strftime', return_value="test-20101111000000"):
+    # with mocktime("20101111000000"):
     #     self.assertFalse(ZfsAutobackup("test test_target1 --verbose --allow-empty --no-progress".split(" ")).run())
     #
     # r = shelltest("zfs list -H -o name -r -t all test_target1")

@@ -11,17 +11,17 @@ class TestZfsNode(unittest2.TestCase):
     def test_keepsource0target10queuedsend(self):
         """Test if thinner doesnt destroy too much early on if there are no common snapshots YET. Issue #84"""
 
-        with patch('time.strftime', return_value="test-20101111000000"):
+        with mocktime("20101111000000"):
             self.assertFalse(ZfsAutobackup(
                 "test test_target1 --no-progress --verbose --keep-source=0 --keep-target=10 --allow-empty --no-send".split(
                     " ")).run())
 
-        with patch('time.strftime', return_value="test-20101111000001"):
+        with mocktime("20101111000001"):
             self.assertFalse(ZfsAutobackup(
                 "test test_target1 --no-progress --verbose --keep-source=0 --keep-target=10 --allow-empty --no-send".split(
                     " ")).run())
 
-        with patch('time.strftime', return_value="test-20101111000002"):
+        with mocktime("20101111000002"):
            self.assertFalse(ZfsAutobackup(
                 "test test_target1 --no-progress --verbose --keep-source=0 --keep-target=10 --allow-empty".split(
                     " ")).run())
@@ -65,7 +65,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
         shelltest("zfs set autobackup:test=true test_target1/target_shouldnotbeexcluded")
         shelltest("zfs create test_target1/target")
 
-        with patch('time.strftime', return_value="test-20101111000000"):
+        with mocktime("20101111000000"):
             self.assertFalse(ZfsAutobackup(
                 "test test_target1/target --no-progress --verbose --allow-empty".split(
                     " ")).run())

@@ -33,26 +33,28 @@ class TestZfsScaling(unittest2.TestCase):
         run_counter=0
         with patch.object(ExecuteNode,'run', run_count) as p:
 
-            with patch('time.strftime', return_value="test-20101112000000"):
+            with mocktime("20101112000000"):
                 self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=10000 --keep-target=10000 --no-holds --allow-empty".split(" ")).run())
 
 
         #this triggers if you make a change with an impact of more than O(snapshot_count/2)
-        expected_runs=343
-        print("ACTUAL RUNS: {}".format(run_counter))
+        expected_runs=342
+        print("EXPECTED RUNS: {}".format(expected_runs))
+        print("ACTUAL RUNS : {}".format(run_counter))
         self.assertLess(abs(run_counter-expected_runs), snapshot_count/2)
 
 
         run_counter=0
         with patch.object(ExecuteNode,'run', run_count) as p:
 
-            with patch('time.strftime', return_value="test-20101112000001"):
+            with mocktime("20101112000001"):
                 self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=10000 --keep-target=10000 --no-holds --allow-empty".split(" ")).run())
 
 
         #this triggers if you make a change with a performance impact of more than O(snapshot_count/2)
         expected_runs=47
-        print("ACTUAL RUNS: {}".format(run_counter))
+        print("EXPECTED RUNS: {}".format(expected_runs))
+        print("ACTUAL RUNS : {}".format(run_counter))
         self.assertLess(abs(run_counter-expected_runs), snapshot_count/2)
 
     def test_manydatasets(self):
@@ -73,12 +75,12 @@ class TestZfsScaling(unittest2.TestCase):
         run_counter=0
         with patch.object(ExecuteNode,'run', run_count) as p:
 
-            with patch('time.strftime', return_value="test-20101112000000"):
+            with mocktime("20101112000000"):
                 self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())
 
 
-        #this triggers if you make a change with an impact of more than O(snapshot_count/2)
-        expected_runs=636
+        #this triggers if you make a change with an impact of more than O(snapshot_count/2)`
+        expected_runs=842
         print("EXPECTED RUNS: {}".format(expected_runs))
         print("ACTUAL RUNS: {}".format(run_counter))
         self.assertLess(abs(run_counter-expected_runs), dataset_count/2)
@@ -88,12 +90,12 @@ class TestZfsScaling(unittest2.TestCase):
         run_counter=0
         with patch.object(ExecuteNode,'run', run_count) as p:
 
-            with patch('time.strftime', return_value="test-20101112000001"):
+            with mocktime("20101112000001"):
                 self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-holds --allow-empty".split(" ")).run())
 
 
         #this triggers if you make a change with a performance impact of more than O(snapshot_count/2)
-        expected_runs=842
+        expected_runs=1047
         print("EXPECTED RUNS: {}".format(expected_runs))
         print("ACTUAL RUNS: {}".format(run_counter))
         self.assertLess(abs(run_counter-expected_runs), dataset_count/2)

@ -14,15 +14,15 @@ class TestSendRecvPipes(unittest2.TestCase):
|
|||||||
"""send basics (remote/local send pipe)"""
|
"""send basics (remote/local send pipe)"""
|
||||||
|
|
||||||
with self.subTest("local local pipe"):
|
with self.subTest("local local pipe"):
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup(
|
self.assertFalse(ZfsAutobackup(
|
||||||
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
|
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress", "--clear-mountpoint",
|
||||||
"--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
|
"--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
|
||||||
|
|
||||||
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
|
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
|
||||||
|
|
||||||
with self.subTest("remote local pipe"):
|
with self.subTest("remote local pipe"):
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
self.assertFalse(ZfsAutobackup(
|
self.assertFalse(ZfsAutobackup(
|
||||||
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
|
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
|
||||||
"--ssh-source=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
|
"--ssh-source=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
|
||||||
@ -30,7 +30,7 @@ class TestSendRecvPipes(unittest2.TestCase):
|
|||||||
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
|
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
|
||||||
|
|
||||||
with self.subTest("local remote pipe"):
|
with self.subTest("local remote pipe"):
|
||||||
with patch('time.strftime', return_value="test-20101111000002"):
|
with mocktime("20101111000002"):
|
||||||
self.assertFalse(ZfsAutobackup(
|
self.assertFalse(ZfsAutobackup(
|
||||||
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
|
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
|
||||||
"--ssh-target=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
|
"--ssh-target=localhost", "--send-pipe=dd bs=1M", "--recv-pipe=dd bs=2M"]).run())
|
||||||
@ -38,7 +38,7 @@ class TestSendRecvPipes(unittest2.TestCase):
|
|||||||
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
|
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
|
||||||
|
|
||||||
with self.subTest("remote remote pipe"):
|
with self.subTest("remote remote pipe"):
|
||||||
with patch('time.strftime', return_value="test-20101111000003"):
|
with mocktime("20101111000003"):
|
||||||
self.assertFalse(ZfsAutobackup(
|
self.assertFalse(ZfsAutobackup(
|
||||||
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
|
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
|
||||||
"--ssh-source=localhost", "--ssh-target=localhost", "--send-pipe=dd bs=1M",
|
"--ssh-source=localhost", "--ssh-target=localhost", "--send-pipe=dd bs=1M",
|
||||||
@ -72,7 +72,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
|
|||||||
|
|
||||||
for compress in zfs_autobackup.compressors.COMPRESS_CMDS.keys():
|
for compress in zfs_autobackup.compressors.COMPRESS_CMDS.keys():
|
||||||
with self.subTest("compress " + compress):
|
with self.subTest("compress " + compress):
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup(
|
self.assertFalse(ZfsAutobackup(
|
||||||
["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--verbose",
|
["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--verbose",
|
||||||
"--compress=" + compress]).run())
|
"--compress=" + compress]).run())
|
||||||
@ -83,15 +83,14 @@ test_target1/test_source2/fs2/sub@test-20101111000003
|
|||||||
"""test different buffer configurations"""
|
"""test different buffer configurations"""
|
||||||
|
|
||||||
with self.subTest("local local pipe"):
|
with self.subTest("local local pipe"):
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup(
|
self.assertFalse(ZfsAutobackup(
|
||||||
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
|
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress", "--clear-mountpoint", "--buffer=1M"]).run())
|
||||||
"--buffer=1M"]).run())
|
|
||||||
|
|
||||||
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
|
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
|
||||||
|
|
||||||
with self.subTest("remote local pipe"):
|
with self.subTest("remote local pipe"):
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
self.assertFalse(ZfsAutobackup(
|
self.assertFalse(ZfsAutobackup(
|
||||||
["test", "test_target1", "--allow-empty", "--verbose", "--exclude-received", "--no-holds",
|
["test", "test_target1", "--allow-empty", "--verbose", "--exclude-received", "--no-holds",
|
||||||
"--no-progress", "--ssh-source=localhost", "--buffer=1M"]).run())
|
"--no-progress", "--ssh-source=localhost", "--buffer=1M"]).run())
|
||||||
@ -99,7 +98,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
|
|||||||
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
|
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
|
||||||
|
|
||||||
with self.subTest("local remote pipe"):
|
with self.subTest("local remote pipe"):
|
||||||
with patch('time.strftime', return_value="test-20101111000002"):
|
with mocktime("20101111000002"):
|
||||||
self.assertFalse(ZfsAutobackup(
|
self.assertFalse(ZfsAutobackup(
|
||||||
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
|
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
|
||||||
"--ssh-target=localhost", "--buffer=1M"]).run())
|
"--ssh-target=localhost", "--buffer=1M"]).run())
|
||||||
@ -107,7 +106,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
|
|||||||
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
|
shelltest("zfs destroy -r test_target1/test_source1/fs1/sub")
|
||||||
|
|
||||||
with self.subTest("remote remote pipe"):
|
with self.subTest("remote remote pipe"):
|
||||||
with patch('time.strftime', return_value="test-20101111000003"):
|
with mocktime("20101111000003"):
|
||||||
self.assertFalse(ZfsAutobackup(
|
self.assertFalse(ZfsAutobackup(
|
||||||
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
|
["test", "test_target1", "--allow-empty", "--exclude-received", "--no-holds", "--no-progress",
|
||||||
"--ssh-source=localhost", "--ssh-target=localhost", "--buffer=1M"]).run())
|
"--ssh-source=localhost", "--ssh-target=localhost", "--buffer=1M"]).run())
|
||||||
@ -139,7 +138,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
|
|||||||
"""test rate limit"""
|
"""test rate limit"""
|
||||||
|
|
||||||
start = time.time()
|
start = time.time()
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup(
|
self.assertFalse(ZfsAutobackup(
|
||||||
["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--rate=50k"]).run())
|
["test", "test_target1", "--exclude-received", "--no-holds", "--no-progress", "--rate=50k"]).run())
|
||||||
|
|
||||||
|
@ -85,7 +85,7 @@ class TestThinner(unittest2.TestCase):
|
|||||||
if random.random()>=0.5:
|
if random.random()>=0.5:
|
||||||
things.append(Thing(now))
|
things.append(Thing(now))
|
||||||
|
|
||||||
(keeps, removes)=thinner.thin(things, now=now)
|
(keeps, removes)=thinner.thin(things, keep_objects=[], now=now)
|
||||||
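Note: thin() gains a keep_objects parameter in this change. The assumed semantics: objects passed in can never end up in the removes list, whatever the schedule says (for example the common snapshot shared with the target). Illustrative only, reusing the names from this test:

pinned = things[:1]                                   # e.g. a common snapshot
keeps, removes = thinner.thin(things, keep_objects=pinned, now=now)
assert not set(pinned) & set(removes)                 # pinned objects survive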
things=keeps
|
things=keeps
|
||||||
|
|
||||||
|
|
||||||
@ -143,7 +143,7 @@ class TestThinner(unittest2.TestCase):
|
|||||||
if random.random()>=0.5:
|
if random.random()>=0.5:
|
||||||
things.append(Thing(now))
|
things.append(Thing(now))
|
||||||
|
|
||||||
(things, removes)=thinner.thin(things, now=now)
|
(things, removes)=thinner.thin(things, keep_objects=[], now=now)
|
||||||
|
|
||||||
result=[]
|
result=[]
|
||||||
for thing in things:
|
for thing in things:
|
||||||
|
@ -38,7 +38,7 @@ class TestZfsVerify(unittest2.TestCase):
|
|||||||
shelltest("dd if=/dev/urandom of=/dev/zvol/test_source1/fs1/bad_zvol count=1 bs=512k")
|
shelltest("dd if=/dev/urandom of=/dev/zvol/test_source1/fs1/bad_zvol count=1 bs=512k")
|
||||||
|
|
||||||
#create backup
|
#create backup
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --no-holds".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-progress --no-holds".split(" ")).run())
|
||||||
|
|
||||||
#Do an ugly hack to create a fault in the bad filesystem
|
#Do an ugly hack to create a fault in the bad filesystem
|
||||||
|
@ -35,7 +35,7 @@ class TestZfsAutobackup(unittest2.TestCase):
|
|||||||
def test_snapshotmode(self):
|
def test_snapshotmode(self):
|
||||||
"""test snapshot tool mode"""
|
"""test snapshot tool mode"""
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test --no-progress --verbose".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test --no-progress --verbose".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
@ -55,11 +55,12 @@ test_target1
|
|||||||
""")
|
""")
|
||||||
|
|
||||||
def test_defaults(self):
|
def test_defaults(self):
|
||||||
|
self.maxDiff=2000
|
||||||
|
|
||||||
with self.subTest("no datasets selected"):
|
with self.subTest("no datasets selected"):
|
||||||
with OutputIO() as buf:
|
with OutputIO() as buf:
|
||||||
with redirect_stderr(buf):
|
with redirect_stderr(buf):
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertTrue(ZfsAutobackup("nonexisting test_target1 --verbose --debug --no-progress".split(" ")).run())
|
self.assertTrue(ZfsAutobackup("nonexisting test_target1 --verbose --debug --no-progress".split(" ")).run())
|
||||||
|
|
||||||
print(buf.getvalue())
|
print(buf.getvalue())
|
||||||
@ -69,7 +70,7 @@ test_target1
|
|||||||
|
|
||||||
with self.subTest("defaults with full verbose and debug"):
|
with self.subTest("defaults with full verbose and debug"):
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --debug --no-progress".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --debug --no-progress".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
@ -98,7 +99,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
|
|||||||
""")
|
""")
|
||||||
|
|
||||||
with self.subTest("bare defaults, allow empty"):
|
with self.subTest("bare defaults, allow empty"):
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --no-progress".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --no-progress".split(" ")).run())
|
||||||
|
|
||||||
|
|
||||||
@ -168,47 +169,43 @@ test_target1/test_source2/fs2/sub@test-20101111000001 userrefs 1 -
|
|||||||
""")
|
""")
|
||||||
|
|
||||||
#make sure time handling is correct. try to make snapshots a year apart and verify that only snapshots mostly 1y old are kept
|
#make sure time handling is correct. try to make snapshots a year apart and verify that only snapshots mostly 1y old are kept
|
||||||
|
#So in this case we only want to see 2 snapshots of 2011, and none of the 2010's anymore.
|
||||||
with self.subTest("test time checking"):
|
with self.subTest("test time checking"):
|
||||||
with patch('time.strftime', return_value="test-20111111000000"):
|
with mocktime("20111211000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --no-progress".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --no-progress".split(" ")).run())
|
||||||
|
|
||||||
|
with mocktime("20111211000001"):
|
||||||
time_str="20111112000000" #month in the "future"
|
|
||||||
future_timestamp=time_secs=time.mktime(time.strptime(time_str,"%Y%m%d%H%M%S"))
|
|
||||||
with patch('time.time', return_value=future_timestamp):
|
|
||||||
with patch('time.strftime', return_value="test-20111111000001"):
|
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --keep-source 1y1y --keep-target 1d1y --no-progress".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --allow-empty --verbose --keep-source 1y1y --keep-target 1d1y --no-progress".split(" ")).run())
|
||||||
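Note: --keep-source 1y1y and --keep-target 1d1y are thinning schedules. Reading a rule as "for <lifetime>, keep one snapshot per <period>" matches the expected listing below, where only the two 2011 snapshots survive the mocked date of 2011-12-11. A toy model of one such rule, under that assumed reading:

def keep_one_per_period(snapshot_secs, period, ttl, now):
    """Toy model of a rule like '1d1y': one keeper per period, for ttl seconds."""
    kept, seen = [], set()
    for t in sorted(snapshot_secs):
        if now - t > ttl:
            continue                  # beyond the rule's lifetime: never kept
        bucket = t // period          # at most one keeper per period bucket
        if bucket not in seen:
            seen.add(bucket)
            kept.append(t)
    return kept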
|
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
self.assertMultiLineEqual(r,"""
|
self.assertMultiLineEqual(r,"""
|
||||||
test_source1
|
test_source1
|
||||||
test_source1/fs1
|
test_source1/fs1
|
||||||
test_source1/fs1@test-20111111000000
|
test_source1/fs1@test-20111211000000
|
||||||
test_source1/fs1@test-20111111000001
|
test_source1/fs1@test-20111211000001
|
||||||
test_source1/fs1/sub
|
test_source1/fs1/sub
|
||||||
test_source1/fs1/sub@test-20111111000000
|
test_source1/fs1/sub@test-20111211000000
|
||||||
test_source1/fs1/sub@test-20111111000001
|
test_source1/fs1/sub@test-20111211000001
|
||||||
test_source2
|
test_source2
|
||||||
test_source2/fs2
|
test_source2/fs2
|
||||||
test_source2/fs2/sub
|
test_source2/fs2/sub
|
||||||
test_source2/fs2/sub@test-20111111000000
|
test_source2/fs2/sub@test-20111211000000
|
||||||
test_source2/fs2/sub@test-20111111000001
|
test_source2/fs2/sub@test-20111211000001
|
||||||
test_source2/fs3
|
test_source2/fs3
|
||||||
test_source2/fs3/sub
|
test_source2/fs3/sub
|
||||||
test_target1
|
test_target1
|
||||||
test_target1/test_source1
|
test_target1/test_source1
|
||||||
test_target1/test_source1/fs1
|
test_target1/test_source1/fs1
|
||||||
test_target1/test_source1/fs1@test-20111111000000
|
test_target1/test_source1/fs1@test-20111211000000
|
||||||
test_target1/test_source1/fs1@test-20111111000001
|
test_target1/test_source1/fs1@test-20111211000001
|
||||||
test_target1/test_source1/fs1/sub
|
test_target1/test_source1/fs1/sub
|
||||||
test_target1/test_source1/fs1/sub@test-20111111000000
|
test_target1/test_source1/fs1/sub@test-20111211000000
|
||||||
test_target1/test_source1/fs1/sub@test-20111111000001
|
test_target1/test_source1/fs1/sub@test-20111211000001
|
||||||
test_target1/test_source2
|
test_target1/test_source2
|
||||||
test_target1/test_source2/fs2
|
test_target1/test_source2/fs2
|
||||||
test_target1/test_source2/fs2/sub
|
test_target1/test_source2/fs2/sub
|
||||||
test_target1/test_source2/fs2/sub@test-20111111000000
|
test_target1/test_source2/fs2/sub@test-20111211000000
|
||||||
test_target1/test_source2/fs2/sub@test-20111111000001
|
test_target1/test_source2/fs2/sub@test-20111211000001
|
||||||
""")
|
""")
|
||||||
|
|
||||||
def test_ignore_othersnaphots(self):
|
def test_ignore_othersnaphots(self):
|
||||||
@ -216,7 +213,7 @@ test_target1/test_source2/fs2/sub@test-20111111000001
|
|||||||
r=shelltest("zfs snapshot test_source1/fs1@othersimple")
|
r=shelltest("zfs snapshot test_source1/fs1@othersimple")
|
||||||
r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")
|
r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
@ -251,7 +248,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
|
|||||||
r=shelltest("zfs snapshot test_source1/fs1@othersimple")
|
r=shelltest("zfs snapshot test_source1/fs1@othersimple")
|
||||||
r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")
|
r=shelltest("zfs snapshot test_source1/fs1@otherdate-20001111000000")
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --other-snapshots".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --other-snapshots".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
@ -286,7 +283,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
|
|||||||
|
|
||||||
def test_nosnapshot(self):
|
def test_nosnapshot(self):
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --no-progress".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-snapshot --no-progress".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
@ -310,7 +307,7 @@ test_target1/test_source2/fs2
|
|||||||
|
|
||||||
def test_nosend(self):
|
def test_nosend(self):
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-send --no-progress".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-send --no-progress".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
@ -333,7 +330,7 @@ test_target1
|
|||||||
def test_ignorereplicated(self):
|
def test_ignorereplicated(self):
|
||||||
r=shelltest("zfs snapshot test_source1/fs1@otherreplication")
|
r=shelltest("zfs snapshot test_source1/fs1@otherreplication")
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --ignore-replicated".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --ignore-replicated".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
@ -362,7 +359,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
|
|||||||
|
|
||||||
def test_noholds(self):
|
def test_noholds(self):
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds --no-progress".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --no-holds --no-progress".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs get -r userrefs test_source1 test_source2 test_target1")
|
r=shelltest("zfs get -r userrefs test_source1 test_source2 test_target1")
|
||||||
@ -394,7 +391,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000 userrefs 0 -
|
|||||||
|
|
||||||
def test_strippath(self):
|
def test_strippath(self):
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --strip-path=1 --no-progress".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --verbose --strip-path=1 --no-progress".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
@ -437,10 +434,10 @@ test_target1/fs2/sub@test-20101111000000
|
|||||||
|
|
||||||
r=shelltest("zfs set refreservation=1M test_source1/fs1")
|
r=shelltest("zfs set refreservation=1M test_source1/fs1")
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --clear-refreservation".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --clear-refreservation".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs get refreservation -r test_source1 test_source2 test_target1")
|
r=shelltest("zfs get -r refreservation test_source1 test_source2 test_target1")
|
||||||
self.assertMultiLineEqual(r,"""
|
self.assertMultiLineEqual(r,"""
|
||||||
NAME PROPERTY VALUE SOURCE
|
NAME PROPERTY VALUE SOURCE
|
||||||
test_source1 refreservation none default
|
test_source1 refreservation none default
|
||||||
@ -475,10 +472,10 @@ test_target1/test_source2/fs2/sub@test-20101111000000 refreservation -
|
|||||||
self.skipTest("This zfs-userspace version doesn't support -o")
|
self.skipTest("This zfs-userspace version doesn't support -o")
|
||||||
|
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --clear-mountpoint --debug".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --clear-mountpoint --debug".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs get canmount -r test_source1 test_source2 test_target1")
|
r=shelltest("zfs get -r canmount test_source1 test_source2 test_target1")
|
||||||
self.assertMultiLineEqual(r,"""
|
self.assertMultiLineEqual(r,"""
|
||||||
NAME PROPERTY VALUE SOURCE
|
NAME PROPERTY VALUE SOURCE
|
||||||
test_source1 canmount on default
|
test_source1 canmount on default
|
||||||
@ -493,13 +490,13 @@ test_source2/fs2/sub@test-20101111000000 canmount - -
|
|||||||
test_source2/fs3 canmount on default
|
test_source2/fs3 canmount on default
|
||||||
test_source2/fs3/sub canmount on default
|
test_source2/fs3/sub canmount on default
|
||||||
test_target1 canmount on default
|
test_target1 canmount on default
|
||||||
test_target1/test_source1 canmount on default
|
test_target1/test_source1 canmount off local
|
||||||
test_target1/test_source1/fs1 canmount noauto local
|
test_target1/test_source1/fs1 canmount noauto local
|
||||||
test_target1/test_source1/fs1@test-20101111000000 canmount - -
|
test_target1/test_source1/fs1@test-20101111000000 canmount - -
|
||||||
test_target1/test_source1/fs1/sub canmount noauto local
|
test_target1/test_source1/fs1/sub canmount noauto local
|
||||||
test_target1/test_source1/fs1/sub@test-20101111000000 canmount - -
|
test_target1/test_source1/fs1/sub@test-20101111000000 canmount - -
|
||||||
test_target1/test_source2 canmount on default
|
test_target1/test_source2 canmount off local
|
||||||
test_target1/test_source2/fs2 canmount on default
|
test_target1/test_source2/fs2 canmount off local
|
||||||
test_target1/test_source2/fs2/sub canmount noauto local
|
test_target1/test_source2/fs2/sub canmount noauto local
|
||||||
test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -
|
test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -
|
||||||
""")
|
""")
|
||||||
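Note: the expected output above encodes the new mounting scheme: datasets that exist on the target only as path placeholders (test_target1/test_source1, test_target1/test_source2 and its fs2) now get an explicit canmount=off, while received filesystems keep canmount=noauto. Standard ZFS semantics, for reference: on auto-mounts with the pool, noauto is mountable only via an explicit zfs mount, off is never mountable. A received filesystem can therefore still be mounted on demand:

shelltest("zfs mount test_target1/test_source1/fs1")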
@ -508,18 +505,17 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -
|
|||||||
def test_rollback(self):
|
def test_rollback(self):
|
||||||
|
|
||||||
#initial backup
|
#initial backup
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
|
||||||
|
|
||||||
#make change
|
#make change
|
||||||
r=shelltest("zfs mount test_target1/test_source1/fs1")
|
|
||||||
r=shelltest("touch /test_target1/test_source1/fs1/change.txt")
|
r=shelltest("touch /test_target1/test_source1/fs1/change.txt")
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
#should fail (busy)
|
#should fail (busy)
|
||||||
self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
|
self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000002"):
|
with mocktime("20101111000002"):
|
||||||
#rollback, should succeed
|
#rollback, should succeed
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --rollback".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --rollback".split(" ")).run())
|
||||||
|
|
||||||
@ -527,36 +523,35 @@ test_target1/test_source2/fs2/sub@test-20101111000000 canmount - -
|
|||||||
def test_destroyincompat(self):
|
def test_destroyincompat(self):
|
||||||
|
|
||||||
#initial backup
|
#initial backup
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
|
||||||
|
|
||||||
#add multiple compatible snapshots (written is still 0)
|
#add multiple compatible snapshots (written is still 0)
|
||||||
r=shelltest("zfs snapshot test_target1/test_source1/fs1@compatible1")
|
r=shelltest("zfs snapshot test_target1/test_source1/fs1@compatible1")
|
||||||
r=shelltest("zfs snapshot test_target1/test_source1/fs1@compatible2")
|
r=shelltest("zfs snapshot test_target1/test_source1/fs1@compatible2")
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
#should be ok, is compatible
|
#should be ok, is compatible
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
#add incompatible snapshot by changing and snapshotting
|
#add incompatible snapshot by changing and snapshotting
|
||||||
r=shelltest("zfs mount test_target1/test_source1/fs1")
|
|
||||||
r=shelltest("touch /test_target1/test_source1/fs1/change.txt")
|
r=shelltest("touch /test_target1/test_source1/fs1/change.txt")
|
||||||
r=shelltest("zfs snapshot test_target1/test_source1/fs1@incompatible1")
|
r=shelltest("zfs snapshot test_target1/test_source1/fs1@incompatible1")
|
||||||
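Note: "incompatible" follows from the written property the comments refer to: a target-side snapshot taken after the common snapshot with written > 0 blocks an incremental receive, which is why --destroy-incompatible has to remove it first. The property can be checked directly (illustrative):

print(shelltest("zfs get -H -o value written test_target1/test_source1/fs1@incompatible1"))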
|
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000002"):
|
with mocktime("20101111000002"):
|
||||||
#--test should fail, now incompatible
|
#--test should fail, now incompatible
|
||||||
self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --test".split(" ")).run())
|
self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --test".split(" ")).run())
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000002"):
|
with mocktime("20101111000002"):
|
||||||
#should fail, now incompatible
|
#should fail, now incompatible
|
||||||
self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
|
self.assertTrue(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000003"):
|
with mocktime("20101111000003"):
|
||||||
#--test should succeed by destroying incompatibles
|
#--test should succeed by destroying incompatibles
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible --test".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible --test".split(" ")).run())
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000003"):
|
with mocktime("20101111000003"):
|
||||||
#should succeed by destroying incompatibles
|
#should succeed by destroying incompatibles
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible".split(" ")).run())
|
||||||
|
|
||||||
@ -594,13 +589,13 @@ test_target1/test_source2/fs2/sub@test-20101111000003
|
|||||||
|
|
||||||
#test all ssh directions
|
#test all ssh directions
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-source localhost --exclude-received".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-source localhost --exclude-received".split(" ")).run())
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-target localhost --exclude-received".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-target localhost --exclude-received".split(" ")).run())
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000002"):
|
with mocktime("20101111000002"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-source localhost --ssh-target localhost".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --ssh-source localhost --ssh-target localhost".split(" ")).run())
|
||||||
|
|
||||||
|
|
||||||
@ -645,7 +640,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
|
|||||||
def test_minchange(self):
|
def test_minchange(self):
|
||||||
|
|
||||||
#initial
|
#initial
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())
|
||||||
|
|
||||||
#make small change, use umount to reflect the changes immediately
|
#make small change, use umount to reflect the changes immediately
|
||||||
@ -655,7 +650,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
|
|||||||
|
|
||||||
|
|
||||||
#too small change, takes no snapshots
|
#too small change, takes no snapshots
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())
|
||||||
|
|
||||||
#make big change
|
#make big change
|
||||||
@ -663,7 +658,7 @@ test_target1/test_source2/fs2/sub@test-20101111000002
|
|||||||
r=shelltest("zfs umount test_source1/fs1; zfs mount test_source1/fs1")
|
r=shelltest("zfs umount test_source1/fs1; zfs mount test_source1/fs1")
|
||||||
|
|
||||||
#bigger change, should take snapshot
|
#bigger change, should take snapshot
|
||||||
with patch('time.strftime', return_value="test-20101111000002"):
|
with mocktime("20101111000002"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --min-change 100000".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
@ -696,7 +691,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
|
|||||||
def test_test(self):
|
def test_test(self):
|
||||||
|
|
||||||
#initial
|
#initial
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --test".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
@ -713,12 +708,12 @@ test_target1
|
|||||||
""")
|
""")
|
||||||
|
|
||||||
#actually make the initial backup
|
#actually make the initial backup
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
|
||||||
|
|
||||||
|
|
||||||
#test incremental
|
#test incremental
|
||||||
with patch('time.strftime', return_value="test-20101111000002"):
|
with mocktime("20101111000002"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --allow-empty --verbose --test".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --allow-empty --verbose --test".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
@ -754,7 +749,7 @@ test_target1/test_source2/fs2/sub@test-20101111000001
|
|||||||
shelltest("zfs create test_target1/test_source1")
|
shelltest("zfs create test_target1/test_source1")
|
||||||
shelltest("zfs send test_source1/fs1@migrate1| zfs recv test_target1/test_source1/fs1")
|
shelltest("zfs send test_source1/fs1@migrate1| zfs recv test_target1/test_source1/fs1")
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
@ -787,15 +782,15 @@ test_target1/test_source2/fs2/sub@test-20101111000000
|
|||||||
def test_keep0(self):
|
def test_keep0(self):
|
||||||
"""test if keep-source=0 and keep-target=0 dont delete common snapshot and break backup"""
|
"""test if keep-source=0 and keep-target=0 dont delete common snapshot and break backup"""
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=0 --keep-target=0".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=0 --keep-target=0".split(" ")).run())
|
||||||
|
|
||||||
#make snapshot, shouldn't delete 0
|
#make snapshot, shouldn't delete 0
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
#make snapshot 2, shouldn't delete 0 since it has holds, but will delete 1 since it has no holds
|
#make snapshot 2, shouldn't delete 0 since it has holds, but will delete 1 since it has no holds
|
||||||
with patch('time.strftime', return_value="test-20101111000002"):
|
with mocktime("20101111000002"):
|
||||||
self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
|
||||||
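Note: holds are what make the keep=0 runs safe: zfs-autobackup places a hold on the latest common snapshot on both sides (unless --no-holds), and ZFS refuses to destroy a held snapshot, so thinning can never remove the snapshot the next incremental depends on. Holds can be inspected by hand:

print(shelltest("zfs holds test_source1/fs1@test-20101111000000"))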
|
|
||||||
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
|
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
|
||||||
@ -827,7 +822,7 @@ test_target1/test_source2/fs2/sub@test-20101111000000
|
|||||||
""")
|
""")
|
||||||
|
|
||||||
#make another backup but with no-holds. we should naturally end up with only number 3
|
#make another backup but with no-holds. we should naturally end up with only number 3
|
||||||
with patch('time.strftime', return_value="test-20101111000003"):
|
with mocktime("20101111000003"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=0 --keep-target=0 --no-holds --allow-empty".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --keep-source=0 --keep-target=0 --no-holds --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
|
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
|
||||||
@ -857,7 +852,7 @@ test_target1/test_source2/fs2/sub@test-20101111000003
|
|||||||
|
|
||||||
|
|
||||||
# run with snapshot-only for 4, since we used no-holds, it will delete 3 on the source, breaking the backup
|
# run with snapshot-only for 4, since we used no-holds, it will delete 3 on the source, breaking the backup
|
||||||
with patch('time.strftime', return_value="test-20101111000004"):
|
with mocktime("20101111000004"):
|
||||||
self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test --no-progress --verbose --keep-source=0 --keep-target=0 --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
|
r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
|
||||||
@ -888,23 +883,28 @@ test_target1/test_source2/fs2/sub@test-20101111000003
|
|||||||
|
|
||||||
def test_progress(self):
|
def test_progress(self):
|
||||||
|
|
||||||
r=shelltest("dd if=/dev/zero of=/test_source1/data.txt bs=200000 count=1")
|
r=shelltest("dd if=/dev/urandom of=/test_source1/data.txt bs=5M count=1")
|
||||||
r = shelltest("zfs snapshot test_source1@test")
|
r = shelltest("zfs snapshot test_source1@test")
|
||||||
|
|
||||||
l=LogConsole(show_verbose=True, show_debug=False, color=False)
|
l=LogConsole(show_verbose=True, show_debug=True, color=False)
|
||||||
n=ZfsNode(utc=False, snapshot_time_format="bla", hold_name="bla", logger=l)
|
n=ZfsNode(utc=False, snapshot_time_format="bla", hold_name="bla", logger=l)
|
||||||
d=ZfsDataset(n,"test_source1@test")
|
d=ZfsDataset(n,"test_source1@test")
|
||||||
|
|
||||||
sp=d.send_pipe([], prev_snapshot=None, resume_token=None, show_progress=True, raw=False, send_pipes=[], send_properties=True, write_embedded=True, zfs_compressed=True)
|
sp=d.send_pipe([], prev_snapshot=None, resume_token=None, show_progress=True, raw=False, send_pipes=[], send_properties=True, write_embedded=True, zfs_compressed=True)
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
with OutputIO() as buf:
|
with OutputIO() as buf:
|
||||||
with redirect_stderr(buf):
|
with redirect_stderr(buf):
|
||||||
try:
|
try:
|
||||||
n.run(["sleep", "2"], inp=sp)
|
|
||||||
|
p=n.run(["mbuffer", "-R1M", "-m4096", "-o" ,"/dev/null"], inp=sp)
|
||||||
|
# p=n.run(["dd", "of=/dev/null"], inp=sp)
|
||||||
|
|
||||||
except:
|
except:
|
||||||
pass
|
pass
|
||||||
|
|
||||||
print(buf.getvalue())
|
print(list(buf.getvalue()))
|
||||||
# correct message?
|
# correct message?
|
||||||
self.assertRegex(buf.getvalue(),".*>>> .*minutes left.*")
|
self.assertRegex(buf.getvalue(),".*>>> .*minutes left.*")
|
||||||
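Note on the reworked fixture: sleep 2 never consumed the stream, while the rate-limited mbuffer gives the progress reporter a real, slow consumer. With roughly 5 MB of urandom throttled by -R1M (1 MB/s; -m4096 keeps the buffer small, -o /dev/null discards the data), the transfer lasts about five seconds, long enough for at least one ">>> ... minutes left" line to reach stderr:

stream_bytes = 5 * 1024 * 1024      # dd bs=5M count=1
rate = 1 * 1024 * 1024              # mbuffer -R1M
print(stream_bytes / rate)          # ~5 seconds of transfer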
|
@ -10,10 +10,10 @@ class TestZfsAutobackup31(unittest2.TestCase):
|
|||||||
|
|
||||||
def test_no_thinning(self):
|
def test_no_thinning(self):
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --keep-target=0 --keep-source=0 --no-thinning".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --keep-target=0 --keep-source=0 --no-thinning".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
@ -54,10 +54,10 @@ test_target1/test_source2/fs2/sub@test-20101111000001
|
|||||||
shelltest("zfs create test_target1/a")
|
shelltest("zfs create test_target1/a")
|
||||||
shelltest("zfs create test_target1/b")
|
shelltest("zfs create test_target1/b")
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1/a --no-progress --verbose --debug".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1/a --no-progress --verbose --debug".split(" ")).run())
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
self.assertFalse(ZfsAutobackup("test test_target1/b --no-progress --verbose".split(" ")).run())
|
self.assertFalse(ZfsAutobackup("test test_target1/b --no-progress --verbose".split(" ")).run())
|
||||||
|
|
||||||
r=shelltest("zfs list -H -o name -r -t snapshot test_target1")
|
r=shelltest("zfs list -H -o name -r -t snapshot test_target1")
|
||||||
@ -75,7 +75,7 @@ test_target1/b/test_target1/a/test_source1/fs1/sub@test-20101111000000
|
|||||||
|
|
||||||
def test_zfs_compressed(self):
|
def test_zfs_compressed(self):
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(
|
self.assertFalse(
|
||||||
ZfsAutobackup("test test_target1 --no-progress --verbose --debug --zfs-compressed".split(" ")).run())
|
ZfsAutobackup("test test_target1 --no-progress --verbose --debug --zfs-compressed".split(" ")).run())
|
||||||
|
|
||||||
@ -84,7 +84,7 @@ test_target1/b/test_target1/a/test_source1/fs1/sub@test-20101111000000
|
|||||||
|
|
||||||
shelltest("zfs set autobackup:test=true test_source1")
|
shelltest("zfs set autobackup:test=true test_source1")
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(
|
self.assertFalse(
|
||||||
ZfsAutobackup("test test_target1 --no-progress --verbose --debug --force --strip-path=1".split(" ")).run())
|
ZfsAutobackup("test test_target1 --no-progress --verbose --debug --force --strip-path=1".split(" ")).run())
|
||||||
|
|
||||||
@ -101,13 +101,13 @@ test_target1/fs2/sub@test-20101111000000
|
|||||||
|
|
||||||
shelltest("zfs snapshot -r test_source1@somesnapshot")
|
shelltest("zfs snapshot -r test_source1@somesnapshot")
|
||||||
|
|
||||||
with patch('time.strftime', return_value="test-20101111000000"):
|
with mocktime("20101111000000"):
|
||||||
self.assertFalse(
|
self.assertFalse(
|
||||||
ZfsAutobackup(
|
ZfsAutobackup(
|
||||||
"test test_target1 --verbose --allow-empty --exclude-unchanged=1".split(" ")).run())
|
"test test_target1 --verbose --allow-empty --exclude-unchanged=1".split(" ")).run())
|
||||||
|
|
||||||
#everything should be excluded, but should not return an error (see #190)
|
#everything should be excluded, but should not return an error (see #190)
|
||||||
with patch('time.strftime', return_value="test-20101111000001"):
|
with mocktime("20101111000001"):
|
||||||
self.assertFalse(
|
self.assertFalse(
|
||||||
ZfsAutobackup(
|
ZfsAutobackup(
|
||||||
"test test_target1 --verbose --allow-empty --exclude-unchanged=1".split(" ")).run())
|
"test test_target1 --verbose --allow-empty --exclude-unchanged=1".split(" ")).run())
|
||||||
|
200
tests/test_zfsautobackup32.py
Normal file
@ -0,0 +1,200 @@
|
|||||||
|
from basetest import *
|
||||||
|
|
||||||
|
class TestZfsAutobackup32(unittest2.TestCase):
|
||||||
|
"""various new 3.2 features"""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
prepare_zpools()
|
||||||
|
self.longMessage=True
|
||||||
|
|
||||||
|
def test_invalid_common_snapshot(self):
|
||||||
|
|
||||||
|
with mocktime("20101111000000"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
|
#create 2 snapshots with the same name, which are invalid as a common snapshot
|
||||||
|
shelltest("zfs snapshot test_source1/fs1@invalid")
|
||||||
|
shelltest("zfs snapshot test_target1/test_source1/fs1@invalid")
|
||||||
|
|
||||||
|
with mocktime("20101111000001"):
|
||||||
|
#try the old way (without guid checking), and fail:
|
||||||
|
self.assertEqual(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --no-guid-check".split(" ")).run(),1)
|
||||||
|
#new way should be ok:
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot".split(" ")).run())
|
||||||
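Note: this is the guid check that --no-guid-check disables: two snapshots only qualify as a common base when their ZFS guid properties match, not merely their names. The mismatch manufactured above can be observed directly (illustrative):

src = shelltest("zfs get -H -o value guid test_source1/fs1@invalid")
tgt = shelltest("zfs get -H -o value guid test_target1/test_source1/fs1@invalid")
assert src != tgt    # same name, different data: not a real common snapshot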
|
|
||||||
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
|
self.assertMultiLineEqual(r,"""
|
||||||
|
test_source1
|
||||||
|
test_source1/fs1
|
||||||
|
test_source1/fs1@test-20101111000000
|
||||||
|
test_source1/fs1@invalid
|
||||||
|
test_source1/fs1@test-20101111000001
|
||||||
|
test_source1/fs1/sub
|
||||||
|
test_source1/fs1/sub@test-20101111000000
|
||||||
|
test_source1/fs1/sub@test-20101111000001
|
||||||
|
test_source2
|
||||||
|
test_source2/fs2
|
||||||
|
test_source2/fs2/sub
|
||||||
|
test_source2/fs2/sub@test-20101111000000
|
||||||
|
test_source2/fs2/sub@test-20101111000001
|
||||||
|
test_source2/fs3
|
||||||
|
test_source2/fs3/sub
|
||||||
|
test_target1
|
||||||
|
test_target1/test_source1
|
||||||
|
test_target1/test_source1/fs1
|
||||||
|
test_target1/test_source1/fs1@test-20101111000000
|
||||||
|
test_target1/test_source1/fs1@invalid
|
||||||
|
test_target1/test_source1/fs1@test-20101111000001
|
||||||
|
test_target1/test_source1/fs1/sub
|
||||||
|
test_target1/test_source1/fs1/sub@test-20101111000000
|
||||||
|
test_target1/test_source1/fs1/sub@test-20101111000001
|
||||||
|
test_target1/test_source2
|
||||||
|
test_target1/test_source2/fs2
|
||||||
|
test_target1/test_source2/fs2/sub
|
||||||
|
test_target1/test_source2/fs2/sub@test-20101111000000
|
||||||
|
test_target1/test_source2/fs2/sub@test-20101111000001
|
||||||
|
""")
|
||||||
|
|
||||||
|
def test_invalid_common_snapshot_with_data(self):
|
||||||
|
|
||||||
|
with mocktime("20101111000000"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
|
#create 2 snapshots with the same name, which are invalid as a common snapshot
|
||||||
|
shelltest("zfs snapshot test_source1/fs1@invalid")
|
||||||
|
shelltest("touch /test_target1/test_source1/fs1/shouldnotbeHere")
|
||||||
|
shelltest("zfs snapshot test_target1/test_source1/fs1@invalid")
|
||||||
|
|
||||||
|
with mocktime("20101111000001"):
|
||||||
|
#try the old way and fail:
|
||||||
|
self.assertEqual(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --destroy-incompatible --no-guid-check".split(" ")).run(),1)
|
||||||
|
#new way should be ok
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --no-snapshot --destroy-incompatible".split(" ")).run())
|
||||||
|
|
||||||
|
r=shelltest("zfs list -H -o name -r -t all "+TEST_POOLS)
|
||||||
|
self.assertMultiLineEqual(r,"""
|
||||||
|
test_source1
|
||||||
|
test_source1/fs1
|
||||||
|
test_source1/fs1@test-20101111000000
|
||||||
|
test_source1/fs1@invalid
|
||||||
|
test_source1/fs1@test-20101111000001
|
||||||
|
test_source1/fs1/sub
|
||||||
|
test_source1/fs1/sub@test-20101111000000
|
||||||
|
test_source1/fs1/sub@test-20101111000001
|
||||||
|
test_source2
|
||||||
|
test_source2/fs2
|
||||||
|
test_source2/fs2/sub
|
||||||
|
test_source2/fs2/sub@test-20101111000000
|
||||||
|
test_source2/fs2/sub@test-20101111000001
|
||||||
|
test_source2/fs3
|
||||||
|
test_source2/fs3/sub
|
||||||
|
test_target1
|
||||||
|
test_target1/test_source1
|
||||||
|
test_target1/test_source1/fs1
|
||||||
|
test_target1/test_source1/fs1@test-20101111000000
|
||||||
|
test_target1/test_source1/fs1@test-20101111000001
|
||||||
|
test_target1/test_source1/fs1/sub
|
||||||
|
test_target1/test_source1/fs1/sub@test-20101111000000
|
||||||
|
test_target1/test_source1/fs1/sub@test-20101111000001
|
||||||
|
test_target1/test_source2
|
||||||
|
test_target1/test_source2/fs2
|
||||||
|
test_target1/test_source2/fs2/sub
|
||||||
|
test_target1/test_source2/fs2/sub@test-20101111000000
|
||||||
|
test_target1/test_source2/fs2/sub@test-20101111000001
|
||||||
|
""")
|
||||||
|
|
||||||
|
|
||||||
|
#check consistent mounting behaviour, see issue #112
|
||||||
|
def test_mount_consitency_mounted(self):
|
||||||
|
"""only filesystems that have canmount=on with a mountpoint should be mounted. """
|
||||||
|
|
||||||
|
shelltest("zfs create -V 10M test_source1/fs1/subvol")
|
||||||
|
|
||||||
|
with mocktime("20101111000000"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
|
r=shelltest("zfs mount |grep -o /test_target1.*")
|
||||||
|
self.assertMultiLineEqual(r,"""
|
||||||
|
/test_target1
|
||||||
|
/test_target1/test_source1/fs1
|
||||||
|
/test_target1/test_source1/fs1/sub
|
||||||
|
/test_target1/test_source2/fs2/sub
|
||||||
|
""")
|
||||||
|
|
||||||
|
|
||||||
|
def test_mount_consitency_unmounted(self):
|
||||||
|
"""only test_target1 should be mounted in this test"""
|
||||||
|
|
||||||
|
shelltest("zfs create -V 10M test_source1/fs1/subvol")
|
||||||
|
|
||||||
|
with mocktime("20101111000000"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test test_target1 --no-progress --verbose --allow-empty --clear-mountpoint".split(" ")).run())
|
||||||
|
|
||||||
|
r=shelltest("zfs mount |grep -o /test_target1.*")
|
||||||
|
self.assertMultiLineEqual(r,"""
|
||||||
|
/test_target1
|
||||||
|
""")
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
def test_transfer_thinning(self):
|
||||||
|
# test pre/post/during transfer thinning and efficient transfer (no transferring of stuff that gets deleted on target)
|
||||||
|
|
||||||
|
#less output
|
||||||
|
shelltest("zfs set autobackup:test2=true test_source1/fs1/sub")
|
||||||
|
|
||||||
|
# nobody wants this one (it is over a year old), so it will be destroyed before transferring
|
||||||
|
with mocktime("20000101000000"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test2 --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
|
# only target wants this one (monthlies)
|
||||||
|
with mocktime("20010101000000"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test2 --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
|
# both want this one (daily + monthly)
|
||||||
|
# other snapshots should not influence the middle one that we actually want.
|
||||||
|
with mocktime("20010201000000"):
|
||||||
|
shelltest("zfs snapshot test_source1/fs1/sub@other1")
|
||||||
|
self.assertFalse(ZfsAutobackup("test2 --allow-empty".split(" ")).run())
|
||||||
|
shelltest("zfs snapshot test_source1/fs1/sub@other2")
|
||||||
|
|
||||||
|
# only source wants this one (daily)
|
||||||
|
with mocktime("20010202000000"):
|
||||||
|
self.assertFalse(ZfsAutobackup("test2 --allow-empty".split(" ")).run())
|
||||||
|
|
||||||
|
#will become the common snapshot
|
||||||
|
with OutputIO() as buf:
|
||||||
|
with redirect_stdout(buf):
|
||||||
|
with mocktime("20010203000000"):
|
||||||
|
self.assertFalse(ZfsAutobackup("--keep-source=1d10d --keep-target=1m10m --allow-empty --verbose --clear-mountpoint --other-snapshots test2 test_target1".split(" ")).run())
|
||||||
|
|
||||||
|
|
||||||
|
print(buf.getvalue())
|
||||||
|
self.assertIn(
|
||||||
|
"""
|
||||||
|
[Source] test_source1/fs1/sub@test2-20000101000000: Destroying
|
||||||
|
[Source] test_source1/fs1/sub@test2-20010101000000: -> test_target1/test_source1/fs1/sub (new)
|
||||||
|
[Source] test_source1/fs1/sub@other1: -> test_target1/test_source1/fs1/sub
|
||||||
|
[Source] test_source1/fs1/sub@test2-20010101000000: Destroying
|
||||||
|
[Source] test_source1/fs1/sub@test2-20010201000000: -> test_target1/test_source1/fs1/sub
|
||||||
|
[Source] test_source1/fs1/sub@other2: -> test_target1/test_source1/fs1/sub
|
||||||
|
[Source] test_source1/fs1/sub@test2-20010203000000: -> test_target1/test_source1/fs1/sub
|
||||||
|
""", buf.getvalue())
|
||||||
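Note: reading --keep-source=1d10d as "for 10 days, keep one per day" and --keep-target=1m10m as "for 10 months, keep one per month" (the same assumed rule format as before), the five runs play out as follows, which the listing below confirms:

# 20000101  wanted by neither side  -> destroyed before any transfer
# 20010101  monthly (target) only   -> sent, then thinned from the source
# 20010201  daily + monthly         -> kept on both sides
# 20010202  daily (source) only     -> kept on source, never transferred
# 20010203  newest                  -> becomes the common snapshot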
|
|
||||||
|
|
||||||
|
r=shelltest("zfs list -H -o name -r -t snapshot test_source1 test_target1")
|
||||||
|
self.assertMultiLineEqual(r,"""
|
||||||
|
test_source1/fs1/sub@other1
|
||||||
|
test_source1/fs1/sub@test2-20010201000000
|
||||||
|
test_source1/fs1/sub@other2
|
||||||
|
test_source1/fs1/sub@test2-20010202000000
|
||||||
|
test_source1/fs1/sub@test2-20010203000000
|
||||||
|
test_target1/test_source1/fs1/sub@test2-20010101000000
|
||||||
|
test_target1/test_source1/fs1/sub@other1
|
||||||
|
test_target1/test_source1/fs1/sub@test2-20010201000000
|
||||||
|
test_target1/test_source1/fs1/sub@other2
|
||||||
|
test_target1/test_source1/fs1/sub@test2-20010203000000
|
||||||
|
""")
|
||||||
|
|
||||||
|
|
@ -1,3 +1,5 @@
|
|||||||
|
from os.path import exists
|
||||||
|
|
||||||
from basetest import *
|
from basetest import *
|
||||||
from zfs_autobackup.BlockHasher import BlockHasher
|
from zfs_autobackup.BlockHasher import BlockHasher
|
||||||
|
|
||||||
@ -9,6 +11,10 @@ class TestZfsCheck(unittest2.TestCase):
|
|||||||
|
|
||||||
|
|
||||||
def test_volume(self):
|
def test_volume(self):
|
||||||
|
|
||||||
|
if exists("/.dockerenv"):
|
||||||
|
self.skipTest("FIXME: zfscheck volumes not supported in docker yet")
|
||||||
|
|
||||||
prepare_zpools()
|
prepare_zpools()
|
||||||
|
|
||||||
shelltest("zfs create -V200M test_source1/vol")
|
shelltest("zfs create -V200M test_source1/vol")
|
||||||
@@ -50,7 +56,7 @@ class TestZfsCheck(unittest2.TestCase):
         shelltest("mkfifo /test_source1/f")

         shelltest("zfs snapshot test_source1@test")
-
+        ZfsCheck("test_source1@test --debug".split(" "), print_arguments=False).run()
         with self.subTest("Generate"):
             with OutputIO() as buf:
                 with redirect_stdout(buf):
@@ -178,15 +184,16 @@ whole_whole2_partial	0	309ffffba2e1977d12f3b7469971f30d28b94bd8
         shelltest("cp tests/data/whole /test_source1/testfile")
         shelltest("zfs snapshot test_source1@test")

-        #breaks pipe when grep exits:
+        #breaks pipe when head exits:
         #important to use --debug, since that generates extra output which would be problematic if we didn't do correct SIGPIPE handling
-        shelltest("python -m zfs_autobackup.ZfsCheck test_source1@test --debug | grep -m1 'Hashing tree'")
-        # time.sleep(5)
+        shelltest("python -m zfs_autobackup.ZfsCheck test_source1@test --debug | head -n1")

         #should NOT be mounted anymore if cleanup went ok:
         self.assertNotRegex(shelltest("mount"), "test_source1@test")

     def test_brokenpipe_cleanup_volume(self):
+        if exists("/.dockerenv"):
+            self.skipTest("FIXME: zfscheck volumes not supported in docker yet")
+
         prepare_zpools()
         shelltest("zfs create -V200M test_source1/vol")
@@ -194,7 +201,7 @@ whole_whole2_partial	0	309ffffba2e1977d12f3b7469971f30d28b94bd8

         #breaks pipe when grep exits:
         #important to use --debug, since that generates extra output which would be problematic if we didn't do correct SIGPIPE handling
-        shelltest("python -m zfs_autobackup.ZfsCheck test_source1/vol@test --debug | grep -m1 'Hashing file'")
+        shelltest("python -m zfs_autobackup.ZfsCheck test_source1/vol@test --debug| grep -m1 'Hashing file'")
         # time.sleep(1)

         r = shelltest("zfs list -H -o name -r -t all " + TEST_POOLS)
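
These tests deliberately kill the read end of the pipe (head -n1 and grep -m1 exit after the first matching line) to prove that ZfsCheck's SIGPIPE handling still runs its cleanup instead of leaving snapshots mounted. A minimal sketch of the handler wiring the tests depend on; the real sigpipe_handler and output_redir live in zfs_autobackup.util, so the body below is an assumption, not the actual implementation:

    import os
    import sys
    from signal import signal, SIGPIPE

    def sigpipe_handler(sig, stack):
        # Assumed shape: once the reader is gone, route stdout/stderr to
        # /dev/null at the fd level so cleanup code can keep printing
        # without dying on EPIPE.
        devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull, sys.stdout.fileno())
        os.dup2(devnull, sys.stderr.fileno())

    signal(SIGPIPE, sigpipe_handler)
    print("Hashing tree ...")  # the reader may already have exited; no crash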
1	tests/tests	Symbolic link
@@ -0,0 +1 @@
+.

42	tests/tests_docker	Executable file
@@ -0,0 +1,42 @@
#!/bin/sh

#NOTE: This script will be started inside the test docker container

set -e

if ! [ -e /.dockerenv ]; then
    echo "only run this script inside a docker container!"
    exit 1
fi

if ! [ -e /dev/ram0 ]; then
    echo "Please load this module outside the container:" >&2
    echo "sudo modprobe brd rd_size=512000" >&2
    exit 1
fi

#start sshd and other stuff
ssh-keygen -A
/usr/sbin/sshd
udevd -d

#config ssh
if ! [ -e /root/.ssh/id_rsa ]; then
    ssh-keygen -t rsa -f /root/.ssh/id_rsa -P ''
fi

cat >> ~/.ssh/config <<EOF
Host *
    addkeystoagent yes
    controlpath ~/.ssh/control-master-%r@%h:%p
    controlmaster auto
    controlpersist 3600
EOF

cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
ssh -oStrictHostKeyChecking=no localhost 'echo SSH OK'

cd /app
python -m unittest discover /app/tests -vvvvf $@
@@ -10,7 +10,7 @@ class CliBase(object):
     Overridden in subclasses that add stuff for the specific programs."""

     # also used by setup.py
-    VERSION = "3.2"
+    VERSION = "3.3"
     HEADER = "{} v{} - (c)2022 E.H.Eefting (edwin@datux.nl)".format(os.path.basename(sys.argv[0]), VERSION)

     def __init__(self, argv, print_arguments=True):
@@ -36,7 +36,7 @@ class LogConsole:
     def warning(self, txt):
         self.clear_progress()
         if self.colorama:
-            print(colorama.Fore.YELLOW + colorama.Style.BRIGHT + " NOTE: " + txt + colorama.Style.RESET_ALL)
+            print(colorama.Fore.YELLOW + colorama.Style.NORMAL + " NOTE: " + txt + colorama.Style.RESET_ALL)
         else:
             print(" NOTE: " + txt)
         sys.stdout.flush()
@@ -1,4 +1,3 @@
-import time

 from .ThinnerRule import ThinnerRule

@@ -37,7 +36,7 @@ class Thinner:

         return ret

-    def thin(self, objects, keep_objects=None, now=None):
+    def thin(self, objects, keep_objects, now):
         """thin list of objects with current schedule rules. objects: list of
         objects to thin. every object should have timestamp attribute.

@@ -49,8 +48,6 @@ class Thinner:
         now: if specified, use this time as current time
         """

-        if not keep_objects:
-            keep_objects = []

         # always keep a number of the last objects?
         if self.always_keep:

@@ -68,9 +65,6 @@ class Thinner:
         for rule in self.rules:
             time_blocks[rule.period] = {}

-        if not now:
-            now = int(time.time())

         keeps = []
         removes = []
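
Both defaults are gone, so every caller now has to say explicitly which objects are pinned and what "now" means; this is what lets the tests above drive the schedule with mocktime. A minimal sketch of the new calling convention (the Snap helper is illustrative, and the schedule string follows the rule format used elsewhere in this project; thin() only requires a timestamp attribute on each object):

    from zfs_autobackup.Thinner import Thinner

    class Snap:
        # stand-in object: thin() only reads .timestamp
        def __init__(self, timestamp):
            self.timestamp = timestamp

    thinner = Thinner("10,1d1w,1w1m")
    snapshots = [Snap(t) for t in range(0, 100000, 3600)]

    # keep_objects and now are no longer optional; pass them explicitly:
    keeps, removes = thinner.thin(snapshots, keep_objects=[], now=100000)
    print(len(keeps), len(removes))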
@@ -1,7 +1,9 @@
 import argparse
+import re
 import sys

 from .CliBase import CliBase
+from .util import datetime_now


 class ZfsAuto(CliBase):

@@ -46,8 +48,8 @@ class ZfsAuto(CliBase):
                 self.verbose("NOTE: Source and target are on the same host, excluding target-path from selection.")
                 self.exclude_paths.append(args.target_path)
             else:
-                if not args.exclude_received:
-                    self.verbose("NOTE: Source and target are on the same host, adding --exclude-received to commandline.")
+                if not args.exclude_received and not args.include_received:
+                    self.verbose("NOTE: Source and target are on the same host, adding --exclude-received to commandline. (use --include-received to overrule)")
                     args.exclude_received = True

         if args.test:

@@ -58,7 +60,11 @@ class ZfsAuto(CliBase):
         self.snapshot_time_format = args.snapshot_format.format(args.backup_name)
         self.hold_name = args.hold_format.format(args.backup_name)

+        dt = datetime_now(args.utc)
+
         self.verbose("")
+        self.verbose("Current time {} : {}".format(args.utc and "UTC" or " ", dt.strftime("%Y-%m-%d %H:%M:%S")))
+
         self.verbose("Selecting dataset property : {}".format(self.property_name))
         self.verbose("Snapshot format : {}".format(self.snapshot_time_format))
         self.verbose("Timezone : {}".format("UTC" if args.utc else "Local"))

@@ -103,6 +109,17 @@ class ZfsAuto(CliBase):
         group.add_argument('--exclude-received', action='store_true',
                            help='Exclude datasets that have the origin of their autobackup: property as "received". '
                                 'This can avoid recursive replication between two backup partners.')
+        group.add_argument('--include-received', action='store_true',
+                           help=argparse.SUPPRESS)
+
+        def regex_argument_type(input_line):
+            """Parses regex arguments into re.Pattern objects"""
+            try:
+                return re.compile(input_line)
+            except:
+                raise ValueError("Could not parse argument '{}' as a regular expression".format(input_line))
+
+        group.add_argument('--exclude-snapshot-pattern', action='append', default=[], type=regex_argument_type, help="Regular expression to match snapshots that will be ignored.")

         return parser
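
Because the pattern is compiled by the argument type itself, a malformed regex is rejected at parse time instead of midway through a backup run. The same idea in isolation, as a sketch (here re.error is caught instead of the bare except above, and argparse turns the ValueError into a usage error):

    import argparse
    import re

    def regex_argument_type(input_line):
        """Parses regex arguments into re.Pattern objects"""
        try:
            return re.compile(input_line)
        except re.error:
            raise ValueError("Could not parse argument '{}' as a regular expression".format(input_line))

    parser = argparse.ArgumentParser()
    parser.add_argument('--exclude-snapshot-pattern', action='append', default=[],
                        type=regex_argument_type)

    args = parser.parse_args(['--exclude-snapshot-pattern', '.*_manual$'])
    assert args.exclude_snapshot_pattern[0].search("pool/fs@backup_manual")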
@@ -1,9 +1,7 @@
-import time
-
 import argparse
-from datetime import datetime
 from signal import signal, SIGPIPE
-from .util import output_redir, sigpipe_handler
+from .util import output_redir, sigpipe_handler, datetime_now

 from .ZfsAuto import ZfsAuto

@@ -33,8 +31,8 @@ class ZfsAutobackup(ZfsAuto):
         if args.allow_empty:
             args.min_change = 0

-        if args.destroy_incompatible:
-            args.rollback = True
+        # if args.destroy_incompatible:
+        #     args.rollback = True

         if args.resume:
             self.warning("The --resume option isn't needed anymore (it's autodetected now)")

@@ -72,6 +70,8 @@ class ZfsAutobackup(ZfsAuto):
                            help='Send over other snapshots as well, not just the ones created by this tool.')
         group.add_argument('--set-snapshot-properties', metavar='PROPERTY=VALUE,...', type=str,
                            help='List of properties to set on the snapshot.')
+        group.add_argument('--no-guid-check', action='store_true',
+                           help='Dont check guid of common snapshots. (faster)')

         group = parser.add_argument_group("Transfer options")

@@ -97,7 +97,7 @@ class ZfsAutobackup(ZfsAuto):
         group.add_argument('--force', '-F', action='store_true',
                            help='Use zfs -F option to force overwrite/rollback. (Useful with --strip-path=1, but use with care)')
         group.add_argument('--destroy-incompatible', action='store_true',
-                           help='Destroy incompatible snapshots on target. Use with care! (implies --rollback)')
+                           help='Destroy incompatible snapshots on target. Use with care! (also does rollback of dataset)')
         group.add_argument('--ignore-transfer-errors', action='store_true',
                            help='Ignore transfer errors (still checks if received filesystem exists. useful for '
                                 'acltype errors)')

@@ -119,6 +119,8 @@ class ZfsAutobackup(ZfsAuto):
                            help='Limit data transfer rate in Bytes/sec (e.g. 128K. requires mbuffer.)')
         group.add_argument('--buffer', metavar='SIZE', default=None,
                            help='Add zfs send and recv buffers to smooth out IO bursts. (e.g. 128M. requires mbuffer)')
+        parser.add_argument('--buffer-chunk-size', metavar="BUFFERCHUNKSIZE", default=None,
+                            help='Tune chunk size when mbuffer is used. (requires mbuffer.)')
         group.add_argument('--send-pipe', metavar="COMMAND", default=[], action='append',
                            help='pipe zfs send output through COMMAND (can be used multiple times)')
         group.add_argument('--recv-pipe', metavar="COMMAND", default=[], action='append',
@@ -142,7 +144,10 @@ class ZfsAutobackup(ZfsAuto):

     # NOTE: this method also uses self.args. args that need extra processing are passed as function parameters:
     def thin_missing_targets(self, target_dataset, used_target_datasets):
-        """thin target datasets that are missing on the source."""
+        """thin target datasets that are missing on the source.
+
+        :type used_target_datasets: list[ZfsDataset]
+        :type target_dataset: ZfsDataset
+        """

         self.debug("Thinning obsolete datasets")
         missing_datasets = [dataset for dataset in target_dataset.recursive_datasets if

@@ -150,6 +155,7 @@ class ZfsAutobackup(ZfsAuto):

         count = 0
         for dataset in missing_datasets:
+            self.debug("analyse missing {}".format(dataset))

             count = count + 1
             if self.args.progress:

@@ -167,7 +173,11 @@ class ZfsAutobackup(ZfsAuto):

     # NOTE: this method also uses self.args. args that need extra processing are passed as function parameters:
     def destroy_missing_targets(self, target_dataset, used_target_datasets):
-        """destroy target datasets that are missing on the source and that meet the requirements"""
+        """destroy target datasets that are missing on the source and that meet the requirements
+
+        :type used_target_datasets: list[ZfsDataset]
+        :type target_dataset: ZfsDataset
+        """

         self.debug("Destroying obsolete datasets")

@@ -193,7 +203,7 @@ class ZfsAutobackup(ZfsAuto):
                 else:
                     # past the deadline?
                     deadline_ttl = ThinnerRule("0s" + self.args.destroy_missing).ttl
-                    now = int(time.time())
+                    now = datetime_now(self.args.utc).timestamp()
                     if dataset.our_snapshots[-1].timestamp + deadline_ttl > now:
                         dataset.verbose("Destroy missing: Waiting for deadline.")
                     else:
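
The deadline test now goes through the same clock helper as snapshot naming, so --utc affects both consistently. Numerically, with --destroy-missing=30d the check works out like this (a sketch; ThinnerRule and datetime_now are this project's helpers, the snapshot age is made up):

    from zfs_autobackup.ThinnerRule import ThinnerRule
    from zfs_autobackup.util import datetime_now

    deadline_ttl = ThinnerRule("0s" + "30d").ttl     # 30 days, expressed in seconds
    now = datetime_now(False).timestamp()            # False = local time, True = UTC
    last_own_snapshot = now - 10 * 24 * 3600         # newest own snapshot, 10 days old

    if last_own_snapshot + deadline_ttl > now:
        print("Destroy missing: Waiting for deadline.")  # 10d < 30d: keep waiting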
@@ -234,11 +244,22 @@ class ZfsAutobackup(ZfsAuto):
         """determine the zfs send pipe"""

         ret = []
+        _mbuffer = False
+        _buffer = "16M"
+        _cs = "128k"
+        _rate = False

         # IO buffer
         if self.args.buffer:
             logger("zfs send buffer : {}".format(self.args.buffer))
-            ret.extend([ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m" + self.args.buffer])
+            _mbuffer = True
+            _buffer = self.args.buffer
+
+        # IO chunk size
+        if self.args.buffer_chunk_size:
+            logger("zfs send chunk size : {}".format(self.args.buffer_chunk_size))
+            _mbuffer = True
+            _cs = self.args.buffer_chunk_size

         # custom pipes
         for send_pipe in self.args.send_pipe:

@@ -256,7 +277,14 @@ class ZfsAutobackup(ZfsAuto):
         # transfer rate
         if self.args.rate:
             logger("zfs send transfer rate : {}".format(self.args.rate))
-            ret.extend([ExecuteNode.PIPE, "mbuffer", "-q", "-s128k", "-m16M", "-R" + self.args.rate])
+            _mbuffer = True
+            _rate = self.args.rate
+
+        if _mbuffer:
+            cmd = [ExecuteNode.PIPE, "mbuffer", "-q", "-s{}".format(_cs), "-m{}".format(_buffer)]
+            if _rate:
+                cmd.append("-R{}".format(self.args.rate))
+            ret.extend(cmd)

         return ret
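
Instead of inserting a separate mbuffer stage for --buffer and another for --rate (two buffers back to back), the options are now collected into flags and emitted as one mbuffer invocation at the end. The combining logic, reduced to a standalone sketch (the pipe sentinel stands in for ExecuteNode.PIPE):

    def build_mbuffer_cmd(buffer=None, chunk_size=None, rate=None, pipe_sentinel="|"):
        """Combine -m (memory buffer), -s (chunk size) and -R (rate limit)
        into a single mbuffer stage; an empty list means no mbuffer is needed."""
        if not (buffer or chunk_size or rate):
            return []
        cmd = [pipe_sentinel, "mbuffer", "-q",
               "-s{}".format(chunk_size or "128k"),
               "-m{}".format(buffer or "16M")]
        if rate:
            cmd.append("-R{}".format(rate))
        return cmd

    # e.g. --buffer 256M --rate 10M:
    print(build_mbuffer_cmd(buffer="256M", rate="10M"))
    # ['|', 'mbuffer', '-q', '-s128k', '-m256M', '-R10M']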
@@ -278,11 +306,19 @@ class ZfsAutobackup(ZfsAuto):
             logger("zfs recv custom pipe : {}".format(recv_pipe))

         # IO buffer
-        if self.args.buffer:
+        if self.args.buffer or self.args.buffer_chunk_size:
+            _cs = "128k"
+            _buffer = "16M"
             # only add second buffer if its useful. (e.g. non local transfer or other pipes active)
             if self.args.ssh_source != None or self.args.ssh_target != None or self.args.recv_pipe or self.args.send_pipe or self.args.compress != None:
                 logger("zfs recv buffer : {}".format(self.args.buffer))
-                ret.extend(["mbuffer", "-q", "-s128k", "-m" + self.args.buffer, ExecuteNode.PIPE])
+                if self.args.buffer_chunk_size:
+                    _cs = self.args.buffer_chunk_size
+                if self.args.buffer:
+                    _buffer = self.args.buffer
+
+                ret.extend(["mbuffer", "-q", "-s{}".format(_cs), "-m{}".format(_buffer), ExecuteNode.PIPE])

         return ret
@@ -342,6 +378,7 @@ class ZfsAutobackup(ZfsAuto):
                     and target_dataset.parent \
                     and target_dataset.parent not in target_datasets \
                     and not target_dataset.parent.exists:
+                target_dataset.debug("Creating unmountable parents")
                 target_dataset.parent.create_filesystem(parents=True)

             # determine common zpool features (cached, so no problem we call it often)

@@ -360,10 +397,8 @@ class ZfsAutobackup(ZfsAuto):
                     destroy_incompatible=self.args.destroy_incompatible,
                     send_pipes=send_pipes, recv_pipes=recv_pipes,
                     decrypt=self.args.decrypt, encrypt=self.args.encrypt,
-                    zfs_compressed=self.args.zfs_compressed, force=self.args.force)
+                    zfs_compressed=self.args.zfs_compressed, force=self.args.force, guid_check=not self.args.no_guid_check)
             except Exception as e:
-                # if self.args.progress:
-                #     self.clear_progress()

                 fail_count = fail_count + 1
                 source_dataset.error("FAILED: " + str(e))

@@ -371,8 +406,6 @@ class ZfsAutobackup(ZfsAuto):
                     self.verbose("Debug mode, aborting on first error")
                     raise

-        # if self.args.progress:
-        #     self.clear_progress()

         target_path_dataset = target_node.get_dataset(self.args.target_path)
         if not self.args.no_thinning:
|
|||||||
snapshot_time_format=self.snapshot_time_format, hold_name=self.hold_name, logger=self,
|
snapshot_time_format=self.snapshot_time_format, hold_name=self.hold_name, logger=self,
|
||||||
ssh_config=self.args.ssh_config,
|
ssh_config=self.args.ssh_config,
|
||||||
ssh_to=self.args.ssh_source, readonly=self.args.test,
|
ssh_to=self.args.ssh_source, readonly=self.args.test,
|
||||||
debug_output=self.args.debug_output, description=description, thinner=source_thinner)
|
debug_output=self.args.debug_output, description=description, thinner=source_thinner,
|
||||||
|
exclude_snapshot_patterns=self.args.exclude_snapshot_pattern)
|
||||||
|
|
||||||
################# select source datasets
|
################# select source datasets
|
||||||
self.set_title("Selecting")
|
self.set_title("Selecting")
|
||||||
@ -454,8 +488,7 @@ class ZfsAutobackup(ZfsAuto):
|
|||||||
################# snapshotting
|
################# snapshotting
|
||||||
if not self.args.no_snapshot:
|
if not self.args.no_snapshot:
|
||||||
self.set_title("Snapshotting")
|
self.set_title("Snapshotting")
|
||||||
dt = datetime.utcnow() if self.args.utc else datetime.now()
|
snapshot_name = datetime_now(self.args.utc).strftime(self.snapshot_time_format)
|
||||||
snapshot_name = dt.strftime(self.snapshot_time_format)
|
|
||||||
source_node.consistent_snapshot(source_datasets, snapshot_name,
|
source_node.consistent_snapshot(source_datasets, snapshot_name,
|
||||||
min_changed_bytes=self.args.min_change,
|
min_changed_bytes=self.args.min_change,
|
||||||
pre_snapshot_cmds=self.args.pre_snapshot_cmd,
|
pre_snapshot_cmds=self.args.pre_snapshot_cmd,
|
||||||
|
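
Snapshot naming, the verbose "Current time" banner, and the destroy-missing deadline now all pull from one datetime_now helper instead of picking between datetime.now() and datetime.utcnow() at each call site. A plausible shape for that helper (the real one lives in zfs_autobackup.util; this sketches the switch, not its actual source):

    from datetime import datetime

    def datetime_now(utc):
        """Current time as a naive datetime: UTC when utc is True, else local."""
        return datetime.utcnow() if utc else datetime.now()

    # e.g. building a snapshot name the way the code above does:
    snapshot_name = datetime_now(True).strftime("test2-%Y%m%d%H%M%S")
    print(snapshot_name)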
@@ -6,7 +6,6 @@ from .ZfsAuto import ZfsAuto
 from .ZfsNode import ZfsNode
 import sys

-raise("need to be rewritten to use zfs-check")

 # # try to be as unix compatible as possible, while still having decent performance
 # def compare_trees_find(source_node, source_path, target_node, target_path):

@@ -87,8 +86,8 @@ def verify_filesystem(source_snapshot, source_mnt, target_snapshot, target_mnt,
         raise(Exception("program error, unknown method"))

     finally:
-        source_snapshot.unmount()
-        target_snapshot.unmount()
+        source_snapshot.unmount(source_mnt)
+        target_snapshot.unmount(target_mnt)


 # def hash_dev(node, dev):

@@ -187,7 +186,7 @@ class ZfsAutoverify(ZfsAuto):
             target_dataset = target_node.get_dataset(target_name)

             # find common snapshots to verify
-            source_snapshot = source_dataset.find_common_snapshot(target_dataset)
+            source_snapshot = source_dataset.find_common_snapshot(target_dataset, True)
             target_snapshot = target_dataset.find_snapshot(source_snapshot)

             if source_snapshot is None or target_snapshot is None:

@@ -236,7 +235,8 @@ class ZfsAutoverify(ZfsAuto):
                                  snapshot_time_format=self.snapshot_time_format, hold_name=self.hold_name, logger=self,
                                  ssh_config=self.args.ssh_config,
                                  ssh_to=self.args.ssh_source, readonly=self.args.test,
-                                 debug_output=self.args.debug_output, description=description)
+                                 debug_output=self.args.debug_output, description=description,
+                                 exclude_snapshot_patterns=self.args.exclude_snapshot_pattern)

             ################# select source datasets
             self.set_title("Selecting")

@@ -307,6 +307,7 @@ class ZfsAutoverify(ZfsAuto):
 def cli():
     import sys

+    raise(Exception("This program is incomplete, dont use it yet."))
     signal(SIGPIPE, sigpipe_handler)
     failed = ZfsAutoverify(sys.argv[1:], False).run()
     sys.exit(min(failed,255))
@@ -74,7 +74,7 @@ class ZfsCheck(CliBase):

     def cleanup_zfs_filesystem(self, snapshot):
         mnt = "/tmp/" + tmp_name()
-        snapshot.unmount()
+        snapshot.unmount(mnt)
         self.debug("Cleaning up temporary mount point")
         self.node.run(["rmdir", mnt], hide_errors=True, valid_exitcodes=[])
@@ -58,6 +58,13 @@ class ZfsDataset:
         """
         self.zfs_node.error("{}: {}".format(self.name, txt))

+    def warning(self, txt):
+        """
+        Args:
+            :type txt: str
+        """
+        self.zfs_node.warning("{}: {}".format(self.name, txt))
+
     def debug(self, txt):
         """
         Args:

@@ -81,8 +88,8 @@ class ZfsDataset:
         Args:
             :type count: int
         """
-        components=self.split_path()
-        if count>len(components):
+        components = self.split_path()
+        if count > len(components):
             raise Exception("Trying to strip too much from path ({} items from {})".format(count, self.name))

         return "/".join(components[count:])
@@ -117,13 +124,27 @@ class ZfsDataset:
     def is_snapshot(self):
         """true if this dataset is a snapshot"""
         return self.name.find("@") != -1

+    @property
+    def is_excluded(self):
+        """true if this dataset is a snapshot and matches the exclude pattern"""
+        if not self.is_snapshot:
+            return False
+
+        for pattern in self.zfs_node.exclude_snapshot_patterns:
+            if pattern.search(self.name) is not None:
+                self.debug("Excluded (path matches snapshot exclude pattern)")
+                return True
+
+
     def is_selected(self, value, source, inherited, exclude_received, exclude_paths, exclude_unchanged):
         """determine if dataset should be selected for backup (called from
         ZfsNode)

         Args:
-            :type exclude_paths: list of str
+            :type exclude_paths: list[str]
             :type value: str
             :type source: str
             :type inherited: bool
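
Exclusion is a first-match scan of the compiled patterns over the full snapshot name, so patterns can anchor on the dataset path, the @, or the snapshot suffix. A tiny standalone illustration of the same check (the pattern list and names are made up):

    import re

    exclude_snapshot_patterns = [re.compile(p) for p in (r"_temp$", r"@manual-")]

    def is_excluded(name):
        # the first pattern that matches anywhere in the name wins
        return any(p.search(name) for p in exclude_snapshot_patterns)

    assert is_excluded("test_source1/fs1@nightly_temp")
    assert is_excluded("test_source1/fs1@manual-before-upgrade")
    assert not is_excluded("test_source1/fs1@test2-20010203000000")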
@@ -189,8 +210,7 @@ class ZfsDataset:
                 self.verbose("Selected")
                 return True

-    @CachedProperty
+    @property
     def parent(self):
         """get zfs-parent of this dataset. for snapshots this means it will get
         the filesystem/volume that it belongs to. otherwise it will return the

@@ -199,11 +219,12 @@ class ZfsDataset:
         we cache this so everything in the parent that is cached also stays.

         returns None if there is no parent.
+        :rtype: ZfsDataset | None
         """
         if self.is_snapshot:
             return self.zfs_node.get_dataset(self.filesystem_name)
         else:
-            stripped=self.rstrip_path(1)
+            stripped = self.rstrip_path(1)
             if stripped:
                 return self.zfs_node.get_dataset(stripped)
             else:
@@ -250,32 +271,46 @@ class ZfsDataset:
         return None

     @CachedProperty
+    def exists_check(self):
+        """check on disk if it exists"""
+        self.debug("Checking if dataset exists")
+        return (len(self.zfs_node.run(tab_split=True, cmd=["zfs", "list", self.name], readonly=True,
+                                      valid_exitcodes=[0, 1],
+                                      hide_errors=True)) > 0)
+
+    @property
     def exists(self):
-        """check if dataset exists. Use force to force a specific value to be
-        cached, if you already know. Useful for performance reasons
+        """returns True if dataset should exist.
+        Use force_exists to force a specific value, if you already know. Useful for performance and test reasons
         """

         if self.force_exists is not None:
-            self.debug("Checking if filesystem exists: was forced to {}".format(self.force_exists))
+            if self.force_exists:
+                self.debug("Dataset should exist")
+            else:
+                self.debug("Dataset should not exist")
             return self.force_exists
         else:
-            self.debug("Checking if filesystem exists")
-            return (self.zfs_node.run(tab_split=True, cmd=["zfs", "list", self.name], readonly=True, valid_exitcodes=[0, 1],
-                                      hide_errors=True) and True)
+            return self.exists_check

-    def create_filesystem(self, parents=False):
+    def create_filesystem(self, parents=False, unmountable=True):
         """create a filesystem

         Args:
             :type parents: bool
         """
-        if parents:
-            self.verbose("Creating filesystem and parents")
-            self.zfs_node.run(["zfs", "create", "-p", self.name])
-        else:
-            self.verbose("Creating filesystem")
-            self.zfs_node.run(["zfs", "create", self.name])
+
+        # recurse up
+        if parents and self.parent and not self.parent.exists:
+            self.parent.create_filesystem(parents, unmountable)
+
+        cmd = ["zfs", "create"]
+
+        if unmountable:
+            cmd.extend(["-o", "canmount=off"])
+
+        cmd.append(self.name)
+        self.zfs_node.run(cmd)

         self.force_exists = True
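
Replacing zfs create -p with an explicit walk up the tree lets every auto-created parent be given canmount=off, so placeholder filesystems never show up as mounted datasets on the target. The recursion, reduced to a standalone sketch (run and exists stand in for the node plumbing):

    def create_filesystem(name, run, exists, parents=False, unmountable=True):
        # create missing ancestors first, each with the same options
        parent = name.rsplit("/", 1)[0] if "/" in name else None
        if parents and parent and not exists(parent):
            create_filesystem(parent, run, exists, parents, unmountable)

        cmd = ["zfs", "create"]
        if unmountable:
            cmd.extend(["-o", "canmount=off"])  # placeholder: holds children, never mounts
        cmd.append(name)
        run(cmd)

    created = []
    create_filesystem("pool/a/b/c", created.append, lambda n: n == "pool/a", parents=True)
    # issues zfs create for pool/a/b, then pool/a/b/c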
@@ -318,9 +353,6 @@ class ZfsDataset:
             "zfs", "get", "-H", "-o", "property,value", "-p", "all", self.name
         ]

-        if not self.exists:
-            return {}
-
         self.debug("Getting zfs properties")

         ret = {}

@@ -341,7 +373,6 @@ class ZfsDataset:
         if min_changed_bytes == 0:
             return True

-
         if int(self.properties['written']) < min_changed_bytes:
             return False
         else:

@@ -358,7 +389,7 @@ class ZfsDataset:

     @property
     def holds(self):
-        """get list of holds for dataset"""
+        """get list[holds] for dataset"""

         output = self.zfs_node.run(["zfs", "holds", "-H", self.name], valid_exitcodes=[0], tab_split=True,
                                    readonly=True)
@@ -401,15 +432,15 @@ class ZfsDataset:
         seconds = time.mktime(dt.timetuple())
         return seconds

-    def from_names(self, names):
-        """convert a list of names to a list of ZfsDatasets for this zfs_node
+    def from_names(self, names, force_exists=None):
+        """convert a list[names] to a list of ZfsDatasets for this zfs_node

         Args:
-            :type names: list of str
+            :type names: list[str]
         """
         ret = []
         for name in names:
-            ret.append(self.zfs_node.get_dataset(name))
+            ret.append(self.zfs_node.get_dataset(name, force_exists))

         return ret
@@ -428,8 +459,11 @@ class ZfsDataset:

     @CachedProperty
     def snapshots(self):
-        """get all snapshots of this dataset"""
+        """get all snapshots of this dataset
+        :rtype: ZfsDataset
+        """

+        #FIXME: dont check for existence. (currently needed for _add_virtual_snapshots)
         if not self.exists:
             return []

@@ -439,11 +473,11 @@ class ZfsDataset:
             "zfs", "list", "-d", "1", "-r", "-t", "snapshot", "-H", "-o", "name", self.name
         ]

-        return self.from_names(self.zfs_node.run(cmd=cmd, readonly=True))
+        return self.from_names(self.zfs_node.run(cmd=cmd, readonly=True), force_exists=True)

     @property
     def our_snapshots(self):
-        """get list of snapshots created by us of this dataset"""
+        """get list[snapshots] created by us of this dataset"""
         ret = []
         for snapshot in self.snapshots:
             if snapshot.is_ours():

@@ -538,7 +572,7 @@ class ZfsDataset:
             "zfs", "list", "-r", "-t", types, "-o", "name", "-H", self.name
         ])

-        return self.from_names(names[1:])
+        return self.from_names(names[1:], force_exists=True)

     @CachedProperty
     def datasets(self, types="filesystem,volume"):

@@ -554,9 +588,10 @@ class ZfsDataset:
             "zfs", "list", "-r", "-t", types, "-o", "name", "-H", "-d", "1", self.name
         ])

-        return self.from_names(names[1:])
+        return self.from_names(names[1:], force_exists=True)

-    def send_pipe(self, features, prev_snapshot, resume_token, show_progress, raw, send_properties, write_embedded, send_pipes, zfs_compressed):
+    def send_pipe(self, features, prev_snapshot, resume_token, show_progress, raw, send_properties, write_embedded,
+                  send_pipes, zfs_compressed):
         """returns a pipe with zfs send output for this snapshot

         resume_token: resume sending from this token. (in that case we don't
@@ -564,8 +599,8 @@ class ZfsDataset:

         Args:
             :param send_pipes: output cmd array that will be added to actual zfs send command. (e.g. mbuffer or compression program)
-            :type send_pipes: list of str
-            :type features: list of str
+            :type send_pipes: list[str]
+            :type features: list[str]
             :type prev_snapshot: ZfsDataset
             :type resume_token: str
             :type show_progress: bool

@@ -579,7 +614,7 @@ class ZfsDataset:
         # all kind of performance options:
         if 'large_blocks' in features and "-L" in self.zfs_node.supported_send_options:
             # large block support (only if recordsize>128k which is seldomly used)
             cmd.append("-L")  # --large-block

         if write_embedded and 'embedded_data' in features and "-e" in self.zfs_node.supported_send_options:
             cmd.append("-e")  # --embed; WRITE_EMBEDDED, more compact stream

@@ -593,8 +628,8 @@ class ZfsDataset:

         # progress output
         if show_progress:
             cmd.append("-v")  # --verbose
             cmd.append("-P")  # --parsable

         # resume a previous send? (don't need more parameters in that case)
         if resume_token:

@@ -603,7 +638,7 @@ class ZfsDataset:
         else:
             # send properties
             if send_properties:
                 cmd.append("-p")  # --props

             # incremental?
             if prev_snapshot:

@@ -617,7 +652,8 @@ class ZfsDataset:

         return output_pipe

-    def recv_pipe(self, pipe, features, recv_pipes, filter_properties=None, set_properties=None, ignore_exit_code=False, force=False):
+    def recv_pipe(self, pipe, features, recv_pipes, filter_properties=None, set_properties=None, ignore_exit_code=False,
+                  force=False):
         """starts a zfs recv for this snapshot and uses pipe as input

         note: you can call it both on a snapshot or filesystem object. The
@@ -627,9 +663,9 @@ class ZfsDataset:
         Args:
             :param recv_pipes: input cmd array that will be prepended to actual zfs recv command. (e.g. mbuffer or decompression program)
             :type pipe: subprocess.pOpen
-            :type features: list of str
-            :type filter_properties: list of str
-            :type set_properties: list of str
+            :type features: list[str]
+            :type filter_properties: list[str]
+            :type set_properties: list[str]
             :type ignore_exit_code: bool
         """

@@ -646,7 +682,7 @@ class ZfsDataset:

         cmd.extend(["zfs", "recv"])

-        # don't mount filesystem that is received
+        # don't let zfs recv mount everything thats received (even with canmount=noauto!)
         cmd.append("-u")

         for property_ in filter_properties:
@@ -676,7 +712,7 @@ class ZfsDataset:
         # self.zfs_node.reset_progress()
         self.zfs_node.run(cmd, inp=pipe, valid_exitcodes=valid_exitcodes)

-        # invalidate cache, but we at least know we exist now
+        # invalidate cache
         self.invalidate()

         # in test mode we assume everything was ok and it exists

@@ -689,6 +725,34 @@ class ZfsDataset:
             self.error("error during transfer")
             raise (Exception("Target doesn't exist after transfer, something went wrong."))

+        # at this point we're sure the actual dataset exists
+        self.parent.force_exists = True
+
+    def automount(self):
+        """Mount the dataset as if one did a zfs mount -a, but only for this dataset.
+        Failure to mount doesn't result in an exception, but outputs errors to STDERR.
+        """
+
+        self.debug("Auto mounting")
+
+        if self.properties['type'] != "filesystem":
+            return
+
+        if self.properties['canmount'] != 'on':
+            return
+
+        if self.properties['mountpoint'] == 'legacy':
+            return
+
+        if self.properties['mountpoint'] == 'none':
+            return
+
+        if self.properties['encryption'] != 'off' and self.properties['keystatus'] == 'unavailable':
+            return
+
+        self.zfs_node.run(["zfs", "mount", self.name], valid_exitcodes=[0, 1])
+
     def transfer_snapshot(self, target_snapshot, features, prev_snapshot, show_progress,
                           filter_properties, set_properties, ignore_recv_exit_code, resume_token,
                           raw, send_properties, write_embedded, send_pipes, recv_pipes, zfs_compressed, force):
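
The guard chain mirrors what zfs mount -a would skip: volumes, canmount!=on, legacy or none mountpoints, and encrypted datasets whose key isn't loaded. The same predicate, condensed (the property dict stands in for the dataset's zfs properties):

    def should_automount(props):
        """True when a plain `zfs mount <name>` would be attempted."""
        return (props['type'] == 'filesystem'
                and props['canmount'] == 'on'
                and props['mountpoint'] not in ('legacy', 'none')
                and not (props['encryption'] != 'off'
                         and props['keystatus'] == 'unavailable'))

    # a parent created with canmount=off (see create_filesystem above) is skipped:
    assert not should_automount({'type': 'filesystem', 'canmount': 'off',
                                 'mountpoint': '/pool/a', 'encryption': 'off',
                                 'keystatus': '-'})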
@@ -698,14 +762,14 @@ class ZfsDataset:
         connects a send_pipe() to recv_pipe()

         Args:
-            :type send_pipes: list of str
-            :type recv_pipes: list of str
+            :type send_pipes: list[str]
+            :type recv_pipes: list[str]
             :type target_snapshot: ZfsDataset
-            :type features: list of str
+            :type features: list[str]
             :type prev_snapshot: ZfsDataset
             :type show_progress: bool
-            :type filter_properties: list of str
-            :type set_properties: list of str
+            :type filter_properties: list[str]
+            :type set_properties: list[str]
             :type ignore_recv_exit_code: bool
             :type resume_token: str
             :type raw: bool

@@ -719,20 +783,28 @@ class ZfsDataset:
         self.debug("Transfer snapshot to {}".format(target_snapshot.filesystem_name))

         if resume_token:
-            target_snapshot.verbose("resuming")
+            self.verbose("resuming")

         # initial or increment
         if not prev_snapshot:
-            target_snapshot.verbose("receiving full".format(self.snapshot_name))
+            self.verbose("-> {} (new)".format(target_snapshot.filesystem_name))
         else:
             # incremental
-            target_snapshot.verbose("receiving incremental".format(self.snapshot_name))
+            self.verbose("-> {}".format(target_snapshot.filesystem_name))

         # do it
         pipe = self.send_pipe(features=features, show_progress=show_progress, prev_snapshot=prev_snapshot,
-                              resume_token=resume_token, raw=raw, send_properties=send_properties, write_embedded=write_embedded, send_pipes=send_pipes, zfs_compressed=zfs_compressed)
+                              resume_token=resume_token, raw=raw, send_properties=send_properties,
+                              write_embedded=write_embedded, send_pipes=send_pipes, zfs_compressed=zfs_compressed)
         target_snapshot.recv_pipe(pipe, features=features, filter_properties=filter_properties,
-                                  set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code, recv_pipes=recv_pipes, force=force)
+                                  set_properties=set_properties, ignore_exit_code=ignore_recv_exit_code,
+                                  recv_pipes=recv_pipes, force=force)
+
+        # try to automount it, if its the initial transfer
+        if not prev_snapshot:
+            # in test mode it doesn't actually exist, so don't try to mount it/read properties
+            if not target_snapshot.zfs_node.readonly:
+                target_snapshot.parent.automount()

     def abort_resume(self):
         """abort current resume state"""
@@ -774,16 +846,16 @@ class ZfsDataset:
             return None

     def thin_list(self, keeps=None, ignores=None):
-        """determines list of snapshots that should be kept or deleted based on
+        """determines list[snapshots] that should be kept or deleted based on
         the thinning schedule. cull the herd!

         returns: ( keeps, obsoletes )

         Args:
-            :param keeps: list of snapshots to always keep (usually the last)
+            :param keeps: list[snapshots] to always keep (usually the last)
             :param ignores: snapshots to completely ignore (usually incompatible target snapshots that are going to be destroyed anyway)
-            :type keeps: list of ZfsDataset
-            :type ignores: list of ZfsDataset
+            :type keeps: list[ZfsDataset]
+            :type ignores: list[ZfsDataset]
         """

         if ignores is None:
@@ -810,23 +882,29 @@ class ZfsDataset:
                 obsolete.destroy()
                 self.snapshots.remove(obsolete)

-    def find_common_snapshot(self, target_dataset):
+    def find_common_snapshot(self, target_dataset, guid_check):
         """find latest common snapshot between us and target returns None if its
         an initial transfer

         Args:
+            :type guid_check: bool
             :type target_dataset: ZfsDataset
         """

         if not target_dataset.snapshots:
             # target has nothing yet
             return None
         else:
             for source_snapshot in reversed(self.snapshots):
-                if target_dataset.find_snapshot(source_snapshot):
-                    source_snapshot.debug("common snapshot")
-                    return source_snapshot
-            target_dataset.error("Cant find common snapshot with source.")
-            raise (Exception("You probably need to delete the target dataset to fix this."))
+                target_snapshot = target_dataset.find_snapshot(source_snapshot)
+                if target_snapshot:
+                    if guid_check and source_snapshot.properties['guid'] != target_snapshot.properties['guid']:
+                        target_snapshot.warning("Common snapshot has invalid guid, ignoring.")
+                    else:
+                        target_snapshot.debug("common snapshot")
+                        return source_snapshot
+            # target_dataset.error("Cant find common snapshot with source.")
+            raise (Exception("Cant find common snapshot with target."))

     def find_start_snapshot(self, common_snapshot, also_other_snapshots):
         """finds first snapshot to send :rtype: ZfsDataset or None if we cant
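
Matching on the snapshot name alone can be fooled by a snapshot that was destroyed and recreated under the same name; the guid property is ZFS's content identity, so comparing guids catches that case, and --no-guid-check restores the old, faster behaviour. The comparison, reduced to a sketch (plain dicts stand in for dataset property lookups):

    def find_common(source_snaps, target_snaps_by_name, guid_check=True):
        # walk source snapshots newest-first; first name the target also has wins
        for snap in reversed(source_snaps):
            target = target_snaps_by_name.get(snap['name'])
            if target is None:
                continue
            if guid_check and snap['guid'] != target['guid']:
                continue  # same name, different contents: not actually common
            return snap
        raise Exception("Cant find common snapshot with target.")

    src = [{'name': 'a', 'guid': 1}, {'name': 'b', 'guid': 2}]
    tgt = {'b': {'name': 'b', 'guid': 99}, 'a': {'name': 'a', 'guid': 1}}
    assert find_common(src, tgt)['name'] == 'a'  # 'b' is skipped: guid mismatch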
@@ -853,13 +931,16 @@ class ZfsDataset:

         return start_snapshot

-    def find_incompatible_snapshots(self, common_snapshot):
-        """returns a list of snapshots that is incompatible for a zfs recv onto
+    def find_incompatible_snapshots(self, common_snapshot, raw):
+        """returns a list[snapshots] that is incompatible for a zfs recv onto
         the common_snapshot. all direct followup snapshots with written=0 are
         compatible.

+        in raw-mode nothing is compatible. issue #219
+
         Args:
             :type common_snapshot: ZfsDataset
+            :type raw: bool
         """

         ret = []
@@ -867,7 +948,7 @@ class ZfsDataset:
         if common_snapshot and self.snapshots:
             followup = True
             for snapshot in self.snapshots[self.find_snapshot_index(common_snapshot) + 1:]:
-                if not followup or int(snapshot.properties['written']) != 0:
+                if raw or not followup or int(snapshot.properties['written']) != 0:
                     followup = False
                     ret.append(snapshot)

@@ -877,8 +958,8 @@ class ZfsDataset:
         """only returns lists of allowed properties for this dataset type

         Args:
-            :type filter_properties: list of str
-            :type set_properties: list of str
+            :type filter_properties: list[str]
+            :type set_properties: list[str]
         """

         allowed_filter_properties = []
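
With raw (encrypted) sends the target cannot absorb anything written on top of the common snapshot, so raw mode declares every followup snapshot incompatible instead of only those with written != 0 (issue #219). Condensed:

    def find_incompatible(snapshots_after_common, raw):
        # written==0 direct followups are normally harmless; in raw mode nothing is
        ret = []
        followup = True
        for snap in snapshots_after_common:
            if raw or not followup or int(snap['written']) != 0:
                followup = False
                ret.append(snap)
        return ret

    snaps = [{'written': 0}, {'written': 0}, {'written': 4096}]
    assert len(find_incompatible(snaps, raw=False)) == 1
    assert len(find_incompatible(snaps, raw=True)) == 3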
@@ -910,7 +991,8 @@ class ZfsDataset:
         while snapshot:
             # create virtual target snapshot
             # NOTE: with force_exists we're telling the dataset it doesn't exist yet. (e.g. its virtual)
-            virtual_snapshot = self.zfs_node.get_dataset(self.filesystem_name + "@" + snapshot.snapshot_name, force_exists=False)
+            virtual_snapshot = self.zfs_node.get_dataset(self.filesystem_name + "@" + snapshot.snapshot_name,
+                                                         force_exists=False)
             self.snapshots.append(virtual_snapshot)
             snapshot = source_dataset.find_next_snapshot(snapshot, also_other_snapshots)

@@ -920,9 +1002,9 @@ class ZfsDataset:
         Args:
             :type common_snapshot: ZfsDataset
             :type target_dataset: ZfsDataset
-            :type source_obsoletes: list of ZfsDataset
-            :type target_obsoletes: list of ZfsDataset
-            :type target_keeps: list of ZfsDataset
+            :type source_obsoletes: list[ZfsDataset]
+            :type target_obsoletes: list[ZfsDataset]
+            :type target_keeps: list[ZfsDataset]
         """

         # on source: destroy all obsoletes before common. (since we cant send them anyways)
@@ -944,7 +1026,7 @@ class ZfsDataset:
         # on target: destroy everything thats obsolete, except common_snapshot
         for target_snapshot in target_dataset.snapshots:
             if (target_snapshot in target_obsoletes) \
-                    and ( not common_snapshot or (target_snapshot.snapshot_name != common_snapshot.snapshot_name)):
+                    and (not common_snapshot or (target_snapshot.snapshot_name != common_snapshot.snapshot_name)):
                 if target_snapshot.exists:
                     target_snapshot.destroy()

@@ -956,8 +1038,8 @@ class ZfsDataset:
             :type start_snapshot: ZfsDataset
         """

-        if 'receive_resume_token' in target_dataset.properties:
-            if start_snapshot==None:
+        if target_dataset.exists and 'receive_resume_token' in target_dataset.properties:
+            if start_snapshot == None:
                 target_dataset.verbose("Aborting resume, its obsolete.")
                 target_dataset.abort_resume()
             else:
@@ -970,20 +1052,22 @@ class ZfsDataset:

         else:
             return resume_token

-    def _plan_sync(self, target_dataset, also_other_snapshots):
+    def _plan_sync(self, target_dataset, also_other_snapshots, guid_check, raw):
         """plan where to start syncing and what to sync and what to keep

         Args:
-        :rtype: ( ZfsDataset, ZfsDataset, list of ZfsDataset, list of ZfsDataset, list of ZfsDataset, list of ZfsDataset )
+        :rtype: ( ZfsDataset, ZfsDataset, list[ZfsDataset], list[ZfsDataset], list[ZfsDataset], list[ZfsDataset] )
         :type target_dataset: ZfsDataset
         :type also_other_snapshots: bool
+        :type guid_check: bool
+        :type raw: bool
         """

         # determine common and start snapshot
         target_dataset.debug("Determining start snapshot")
-        common_snapshot = self.find_common_snapshot(target_dataset)
+        common_snapshot = self.find_common_snapshot(target_dataset, guid_check=guid_check)
         start_snapshot = self.find_start_snapshot(common_snapshot, also_other_snapshots)
-        incompatible_target_snapshots = target_dataset.find_incompatible_snapshots(common_snapshot)
+        incompatible_target_snapshots = target_dataset.find_incompatible_snapshots(common_snapshot, raw)

         # let thinner decide what's obsolete on source
         source_obsoletes = []
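The new guid_check flag hardens common-snapshot detection: instead of trusting matching snapshot names, it can also compare the ZFS guid property, which identifies a snapshot's actual contents. A sketch of the idea; the helper shapes here are illustrative, not the project's exact signatures:

    def find_common_snapshot(source_snapshots, target_snapshots, guid_check):
        # walk the target's snapshots from newest to oldest and take the first
        # one whose name also exists on the source
        source_by_name = {s.snapshot_name: s for s in source_snapshots}
        for target_snap in reversed(target_snapshots):
            source_snap = source_by_name.get(target_snap.snapshot_name)
            if source_snap is None:
                continue
            if guid_check and source_snap.properties['guid'] != target_snap.properties['guid']:
                # same name but different contents: not a usable common snapshot
                continue
            return source_snap
        return None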
@@ -1005,7 +1089,7 @@ class ZfsDataset:

         what to do

         Args:
-        :type incompatible_target_snapshots: list of ZfsDataset
+        :type incompatible_target_snapshots: list[ZfsDataset]
         :type destroy_incompatible: bool
         """
@@ -1013,42 +1097,60 @@ class ZfsDataset:

         if not destroy_incompatible:
             for snapshot in incompatible_target_snapshots:
                 snapshot.error("Incompatible snapshot")
-            raise (Exception("Please destroy incompatible snapshots or use --destroy-incompatible."))
+            raise (Exception("Please destroy incompatible snapshots on target, or use --destroy-incompatible."))
         else:
             for snapshot in incompatible_target_snapshots:
                 snapshot.verbose("Incompatible snapshot")
-                snapshot.destroy()
+                snapshot.destroy(fail_exception=True)
                 self.snapshots.remove(snapshot)

+            if len(incompatible_target_snapshots) > 0:
+                self.rollback()

     def sync_snapshots(self, target_dataset, features, show_progress, filter_properties, set_properties,
                        ignore_recv_exit_code, holds, rollback, decrypt, encrypt, also_other_snapshots,
-                       no_send, destroy_incompatible, send_pipes, recv_pipes, zfs_compressed, force):
+                       no_send, destroy_incompatible, send_pipes, recv_pipes, zfs_compressed, force, guid_check):
         """sync this dataset's snapshots to target_dataset, while also thinning
         out old snapshots along the way.

         Args:
-        :type send_pipes: list of str
-        :type recv_pipes: list of str
+        :type send_pipes: list[str]
+        :type recv_pipes: list[str]
         :type target_dataset: ZfsDataset
-        :type features: list of str
+        :type features: list[str]
         :type show_progress: bool
-        :type filter_properties: list of str
-        :type set_properties: list of str
+        :type filter_properties: list[str]
+        :type set_properties: list[str]
         :type ignore_recv_exit_code: bool
         :type holds: bool
         :type rollback: bool
         :type decrypt: bool
         :type also_other_snapshots: bool
         :type no_send: bool
-        :type destroy_incompatible: bool
+        :type guid_check: bool
         """

-        self.verbose("sending to {}".format(target_dataset))
+        # self.verbose("-> {}".format(target_dataset))

+        # defaults for these settings if there is no encryption stuff going on:
+        send_properties = True
+        raw = False
+        write_embedded = True
+
+        # source dataset encrypted?
+        if self.properties.get('encryption', 'off') != 'off':
+            # user wants to send it over decrypted?
+            if decrypt:
+                # when decrypting, zfs can't send properties
+                send_properties = False
+            else:
+                # keep data encrypted by sending it raw (including properties)
+                raw = True

         (common_snapshot, start_snapshot, source_obsoletes, target_obsoletes, target_keeps,
          incompatible_target_snapshots) = \
-            self._plan_sync(target_dataset=target_dataset, also_other_snapshots=also_other_snapshots)
+            self._plan_sync(target_dataset=target_dataset, also_other_snapshots=also_other_snapshots,
+                            guid_check=guid_check, raw=raw)

         # NOTE: we do this because we don't want filesystems to fill up when backups keep failing.
         # Also useful with no_send to still clean up stuff.
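The encryption block that now runs before _plan_sync() boils down to a small decision table, and it has to run early because raw influences which target snapshots count as incompatible. A condensed sketch of that decision, assuming a plain dict of dataset properties:

    def plan_send_mode(source_properties, decrypt):
        # defaults when no encryption is involved
        send_properties = True
        raw = False
        if source_properties.get('encryption', 'off') != 'off':
            if decrypt:
                # decrypted (plain) send: zfs cannot also send properties
                send_properties = False
            else:
                # raw send keeps data and properties encrypted on the wire
                raw = True
        return send_properties, raw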
@@ -1066,42 +1168,29 @@ class ZfsDataset:

         # check if we can resume
         resume_token = self._validate_resume_token(target_dataset, start_snapshot)

-        # rollback target to latest?
-        if rollback:
-            target_dataset.rollback()
-
-        #defaults for these settings if there is no encryption stuff going on:
-        send_properties = True
-        raw = False
-        write_embedded = True
-
-        (active_filter_properties, active_set_properties) = self.get_allowed_properties(filter_properties, set_properties)
-
-        # source dataset encrypted?
-        if self.properties.get('encryption', 'off')!='off':
-            # user wants to send it over decrypted?
-            if decrypt:
-                # when decrypting, zfs cant send properties
-                send_properties=False
-            else:
-                # keep data encrypted by sending it raw (including properties)
-                raw=True
+        (active_filter_properties, active_set_properties) = self.get_allowed_properties(filter_properties,
+                                                                                        set_properties)

         # encrypt at target?
         if encrypt and not raw:
             # filter out encryption properties to let encryption on the target take place
-            active_filter_properties.extend(["keylocation","pbkdf2iters","keyformat", "encryption"])
-            write_embedded=False
+            active_filter_properties.extend(["keylocation", "pbkdf2iters", "keyformat", "encryption"])
+            write_embedded = False

         # now actually transfer the snapshots
         prev_source_snapshot = common_snapshot
         source_snapshot = start_snapshot
+        do_rollback = rollback
         while source_snapshot:
             target_snapshot = target_dataset.find_snapshot(source_snapshot)  # still virtual

             # does target actually want it?
-            if target_snapshot not in target_obsoletes:
+            if target_snapshot not in target_obsoletes and not source_snapshot.is_excluded:

+                # do the rollback, one time at first transfer
+                if do_rollback:
+                    target_dataset.rollback()
+                    do_rollback = False

                 source_snapshot.transfer_snapshot(target_snapshot, features=features,
                                                   prev_snapshot=prev_source_snapshot, show_progress=show_progress,
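Two behavioral changes land in this hunk: the target rollback now happens lazily, exactly once, just before the first snapshot that is actually transferred, and snapshots flagged is_excluded are skipped. The diff does not show how is_excluded is computed; a plausible fnmatch-based matcher, offered purely as an assumption, could look like:

    import fnmatch

    def is_excluded(snapshot_name, exclude_snapshot_patterns):
        # hypothetical matcher: the diff only shows the is_excluded flag and the
        # exclude_snapshot_patterns option, not the actual matching rule
        return any(fnmatch.fnmatch(snapshot_name, pattern)
                   for pattern in exclude_snapshot_patterns)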
@@ -1155,15 +1244,14 @@ class ZfsDataset:

         self.zfs_node.run(cmd=cmd, valid_exitcodes=[0])

-    def unmount(self):
+    def unmount(self, mount_point):

         self.debug("Unmounting")

         cmd = [
-            "umount", self.name
+            "umount", mount_point
         ]

         self.zfs_node.run(cmd=cmd, valid_exitcodes=[0])

     def clone(self, name):
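unmount() now takes the mount point rather than assuming the dataset name, which matters because umount operates on paths and a dataset's mountpoint property need not mirror its name. Hypothetical usage:

    # look up where the dataset is actually mounted, then unmount that path
    mount_point = dataset.properties['mountpoint']
    if mount_point not in ('none', 'legacy', '-'):
        dataset.unmount(mount_point)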
@@ -1204,4 +1292,3 @@ class ZfsDataset:

         self.zfs_node.run(cmd=cmd, valid_exitcodes=[0])

         self.invalidate()
-
@@ -12,6 +12,7 @@ from .CachedProperty import CachedProperty

 from .ZfsPool import ZfsPool
 from .ZfsDataset import ZfsDataset
 from .ExecuteNode import ExecuteError
+from .util import datetime_now


 class ZfsNode(ExecuteNode):
@@ -19,7 +20,7 @@ class ZfsNode(ExecuteNode):

     def __init__(self, logger, utc=False, snapshot_time_format="", hold_name="", ssh_config=None, ssh_to=None, readonly=False,
                  description="",
-                 debug_output=False, thinner=None):
+                 debug_output=False, thinner=None, exclude_snapshot_patterns=[]):

         self.utc = utc
         self.snapshot_time_format = snapshot_time_format
@@ -29,6 +30,8 @@ class ZfsNode(ExecuteNode):

         self.logger = logger

+        self.exclude_snapshot_patterns = exclude_snapshot_patterns
+
         if ssh_config:
             self.verbose("Using custom SSH config: {}".format(ssh_config))
@@ -59,7 +62,8 @@ class ZfsNode(ExecuteNode):

     def thin(self, objects, keep_objects):
         # NOTE: if thinning is disabled with --no-thinning, self.__thinner will be none.
         if self.__thinner is not None:
-            return self.__thinner.thin(objects, keep_objects)
+
+            return self.__thinner.thin(objects, keep_objects, datetime_now(self.utc).timestamp())
         else:
             return (keep_objects, [])
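Passing datetime_now(self.utc).timestamp() into the thinner, instead of letting the thinner read the clock itself, makes thinning decisions reproducible under test. A minimal sketch of a thinner that accepts an injected "now"; the real Thinner's schedule logic is more involved and not shown in this diff:

    class AgeThinner:
        """illustrative stand-in for the project's Thinner"""
        def __init__(self, max_age_seconds):
            self.max_age_seconds = max_age_seconds

        def thin(self, objects, keep_objects, now):
            # 'now' is supplied by the caller, so tests can pass a frozen timestamp
            keeps, obsoletes = [], []
            for obj in objects:
                if obj in keep_objects or now - obj.timestamp < self.max_age_seconds:
                    keeps.append(obj)
                else:
                    obsoletes.append(obj)
            return keeps, obsoletes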
@@ -1,129 +0,0 @@
-import os.path
-import os
-import subprocess
-import sys
-import time
-from signal import signal, SIGPIPE
-
-import util
-
-signal(SIGPIPE, util.sigpipe_handler)
-
-
-try:
-    print("before first")
-    raise Exception("first")
-except Exception as e:
-    print("before second")
-    raise Exception("second")
-finally:
-    print("YO")
-
-def generator():
-
-    try:
-        util.deb('in generator')
-        print("TRIGGER SIGPIPE")
-        sys.stdout.flush()
-        util.deb('after trigger')
-
-        # if False:
-        yield ("bla")
-        # yield ("bla")
-
-    except GeneratorExit as e:
-        util.deb('GENEXIT '+str(e))
-        raise
-
-    except Exception as e:
-        util.deb('EXCEPT '+str(e))
-    finally:
-        util.deb('FINALLY')
-        print("something else")
-        sys.stdout.flush()
-        util.deb('after print in finally WOOP!')
-
-
-util.deb('START')
-g=generator()
-util.deb('after generator')
-for bla in g:
-    # print("received something")
-    util.deb('received from generator')
-    break
-    # raise Exception("hi")
-
-    pass
-raise Exception("hi")
-
-util.deb('after for')
-
-while True:
-    pass
-
-#
-# with open('test.py', 'rb') as fh:
-#
-#     # fsize = fh.seek(10000, os.SEEK_END)
-#     # print(fsize)
-#
-#     start=time.time()
-#     for i in range(0,1000000):
-#         # fh.seek(0, 0)
-#         fsize=fh.seek(0, os.SEEK_END)
-#         # fsize=fh.tell()
-#         # os.path.getsize('test.py')
-#     print(time.time()-start)
-#
-#
-#     print(fh.tell())
-#
-# sys.exit(0)
-#
-#
-#
-# checked=1
-# skipped=1
-# coverage=0.1
-#
-# max_skip=0
-#
-#
-# skipinarow=0
-# while True:
-#     total=checked+skipped
-#
-#     skip=coverage<random()
-#     if skip:
-#         skipped = skipped + 1
-#         print("S {:.2f}%".format(checked * 100 / total))
-#
-#         skipinarow = skipinarow+1
-#         if skipinarow>max_skip:
-#             max_skip=skipinarow
-#     else:
-#         skipinarow=0
-#         checked=checked+1
-#         print("C {:.2f}%".format(checked * 100 / total))
-#
-# print(max_skip)
-#
-# skip=0
-# while True:
-#
-#     total=checked+skipped
-#     if skip>0:
-#         skip=skip-1
-#         skipped = skipped + 1
-#         print("S {:.2f}%".format(checked * 100 / total))
-#     else:
-#         checked=checked+1
-#         print("C {:.2f}%".format(checked * 100 / total))
-#
-#     #calc new skip
-#     skip=skip+((1/coverage)-1)*(random()*2)
-#     # print(skip)
-#     if skip> max_skip:
-#         max_skip=skip
-#
-# print(max_skip)
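The deleted file above was a scratch script probing how SIGPIPE interacts with generators and finally: blocks (hence all the debug prints). The core pattern it exercised, reduced to a minimal sketch:

    import sys
    from signal import signal, SIGPIPE

    def sigpipe_handler(sig, stack):
        # once stdout's reader disappears, further writes deliver SIGPIPE;
        # handling it lets cleanup code run instead of killing the process mid-flight
        sys.exit(1)

    # Python normally ignores SIGPIPE; installing a handler re-enables delivery
    signal(SIGPIPE, sigpipe_handler)

    print("data for a pipe that may close early")
    sys.stdout.flush()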
@@ -1,21 +1,9 @@
-# root@psyt14s:/home/psy/zfs_autobackup# ls -lh /home/psy/Downloads/carimage.zip
-# -rw-rw-r-- 1 psy psy 990M Nov 26  2020 /home/psy/Downloads/carimage.zip
-# root@psyt14s:/home/psy/zfs_autobackup# time sha1sum /home/psy/Downloads/carimage.zip
-# a682e1a36e16fe0d0c2f011104f4a99004f19105  /home/psy/Downloads/carimage.zip
-#
-# real    0m2.558s
-# user    0m2.105s
-# sys     0m0.448s
-# root@psyt14s:/home/psy/zfs_autobackup# time python3 -m zfs_autobackup.ZfsCheck
-#
-# real    0m1.459s
-# user    0m0.993s
-# sys     0m0.462s

 # NOTE: surprisingly, sha1 via python3 is faster than the native sha1sum utility, even in the way we use it below!
 import os
 import platform
 import sys
+from datetime import datetime


 def tmp_name(suffix=""):
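The NOTE about python3 beating sha1sum refers to chunked hashing with the stdlib hashlib module. A minimal version of that approach; the block size is an arbitrary choice here:

    import hashlib

    def sha1_file(path, block_size=64 * 1024):
        # stream the file in chunks so memory use stays flat on multi-GB images
        h = hashlib.sha1()
        with open(path, 'rb') as fh:
            for block in iter(lambda: fh.read(block_size), b''):
                h.update(block)
        return h.hexdigest()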
@@ -48,7 +36,7 @@ def output_redir():

 def sigpipe_handler(sig, stack):
     # redir output so we don't get more SIGPIPEs during cleanup. (which may try to write to stdout)
     output_redir()
-    deb('redir')
+    #deb('redir')

 # def check_output():
 #     """make sure stdout still functions. if it's broken, this will trigger a SIGPIPE which will be handled by the sigpipe_handler."""
@@ -63,3 +51,13 @@ def sigpipe_handler(sig, stack):

 #     fh.write("DEB: "+txt+"\n")


+# This should be the only source of truth for the current datetime.
+# This function will be mocked during unit testing.
+
+
+datetime_now_mock=None
+def datetime_now(utc):
+    if datetime_now_mock is None:
+        return( datetime.utcnow() if utc else datetime.now())
+    else:
+        return datetime_now_mock
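With the module-level datetime_now_mock hook, a unit test can freeze the clock simply by assigning to it. A sketch, assuming the zfs_autobackup.util module layout introduced above:

    from datetime import datetime
    from zfs_autobackup import util

    def test_frozen_clock():
        # freeze the clock: every call to util.datetime_now() now returns this instant
        util.datetime_now_mock = datetime(2023, 1, 1, 12, 0, 0)
        try:
            assert util.datetime_now(utc=False) == datetime(2023, 1, 1, 12, 0, 0)
        finally:
            util.datetime_now_mock = None   # unfreeze for other tests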