Go into the root directory
[hostname:/] root# cd /
You can look in the filesystem and you'll see its contents and the .zfs folder
[hostname:/] root# ls -l old-pool/filesystem
total 24
dr-xr-xr-x   3 root  root    3 Mar 31 00:34 .zfs/
drwxr-xr-x  18 root  root  101 Mar 11 21:45 etc/
drwx------   4 root  root   16 Feb 19 08:07 root/
drwxr-xr-x   3 root  root    3 Feb 19 21:18 usr_local_etc/
drwxr-xr-x   3 root  root    3 Feb 19 21:18 usr_local_var_db_mysql/
Make a final "migration" snapshot that represents the latest old-pool/filesystem
[hostname:/] root# zfs snapshot old-pool/filesystem@migration
You can see there are 3 snapshots in there
[hostname:/] root# ls -l old-pool/filesystem/.zfs/snapshot/
20060329/  20060330/  migration/
Do a zfs backup of the oldest snapshot and pipe that into a zfs restore. This will create the filesystem in the new-pool. You could also do this over ssh.
[hostname:/] root# zfs backup old-pool/filesystem@20060329 | zfs restore new-pool/filesystem@20060329
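The ssh variant mentioned above would look something like this; "remotehost" is a hypothetical machine that holds new-pool, not part of the original example:

```shell
# Hypothetical sketch: ship the full stream to another machine over ssh.
# "remotehost" is an assumption; the pool and snapshot names are from the
# example above. Requires zfs restore privileges on the remote side.
zfs backup old-pool/filesystem@20060329 | \
    ssh remotehost zfs restore new-pool/filesystem@20060329
```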
Now do an incremental (-i) backup and restore using the first snapshot you used above and the one that comes after it. The key here is that incremental backups expect a pre-existing new-pool/filesystem; this is how they differ from the non-incremental backup above.
[hostname:/] root# zfs backup -i old-pool/filesystem@20060329 old-pool/filesystem@20060330 | zfs restore new-pool/filesystem
Do an incremental on the next pair; this one happens to use the final "migration" snapshot.
[hostname:/] root# zfs backup -i old-pool/filesystem@20060330 old-pool/filesystem@migration | zfs restore new-pool/filesystem
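The three backup/restore steps above follow a fixed pattern, so they can be sketched as a loop over the snapshot list. This is a hypothetical dry-run script: it only prints the commands it would run, with the names carried over from the example above. Remove the echo to actually execute them.

```shell
#!/bin/sh
# Dry-run sketch: replicate a filesystem snapshot by snapshot.
# The oldest snapshot goes over as a full stream (which creates the
# target filesystem); each later one goes as an incremental against
# its predecessor. All names are assumptions from the example above.
SRC=old-pool/filesystem
DST=new-pool/filesystem
SNAPS="20060329 20060330 migration"

prev=""
for snap in $SNAPS; do
    if [ -z "$prev" ]; then
        # First snapshot: full stream, creates $DST
        echo "zfs backup $SRC@$snap | zfs restore $DST@$snap"
    else
        # Later snapshots: incremental stream against the previous one
        echo "zfs backup -i $SRC@$prev $SRC@$snap | zfs restore $DST"
    fi
    prev=$snap
done
```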
When you look in the new-pool/filesystem you'll see that it has been populated from the last migration snapshot.
[hostname:/] root# ls -l new-pool/filesystem
total 24
dr-xr-xr-x   3 root  root    3 Mar 31 00:34 .zfs/
drwxr-xr-x  18 root  root  101 Mar 11 21:45 etc/
drwx------   4 root  root   16 Feb 19 08:07 root/
drwxr-xr-x   3 root  root    3 Feb 19 21:18 usr_local_etc/
drwxr-xr-x   3 root  root    3 Feb 19 21:18 usr_local_var_db_mysql/
And all three snapshots are present
[hostname:/] root# ls -l new-pool/filesystem/.zfs/snapshot/
20060329/  20060330/  migration/
There will be some other interesting things to do once there is a "zfs remove _device_"
The subcommands were renamed a while back.
'zfs backup' is now 'zfs send'
'zfs restore' is now 'zfs receive'
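With the renamed subcommands, the same three migration steps would look like this (pool and snapshot names carried over from the example above):

```shell
# Full stream of the oldest snapshot creates new-pool/filesystem
zfs send old-pool/filesystem@20060329 | zfs receive new-pool/filesystem@20060329

# Incremental streams for each later snapshot pair
zfs send -i old-pool/filesystem@20060329 old-pool/filesystem@20060330 | zfs receive new-pool/filesystem
zfs send -i old-pool/filesystem@20060330 old-pool/filesystem@migration | zfs receive new-pool/filesystem
```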