Popular New Releases in Incremental Backup
RsyncOSX
Version 6.7.2
clsync
v0.4.5
grsync
v1.6.1
rusync
v0.7.2
rsync4j
Release v3.2.3-8
Popular Libraries in Incremental Backup
by rsnapshot perl
2119 GPL-2.0
a tool for backing up your data using rsync (if you want to get help, use https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss)
by sickill shell
993 MIT
"DIY Dropbox" or "2-way directory (r)sync with proper deletion"
by rsyncOSX swift
976 MIT
A macOS GUI for rsync. Compiled for macOS Big Sur
by osxfuse c
879 GPL-2.0
File system based on the SSH File Transfer Protocol
by WayneD c
703 NOASSERTION
An open source utility that provides fast incremental file transfer. It also has useful features for backup and restore operations among many other use cases.
by mateogianolio javascript
700 NOASSERTION
Auto-sync files or directories over SSH.
by Redundancy go
504 MIT
gosync is a library for Golang styled around zsync / rsync, written with the intent that it enables efficient differential file transfer in a number of ways. NB: I am unable to contribute to this at the moment
by furier javascript
436 MIT
websync is intended to be an rsync manager, where rsync tasks can be added, scheduled and maintained in a sane manner.
by corpnewt python
415
An even more robust edition of my previous MountEFI scripts
Trending New libraries in Incremental Backup
by WayneD c
703 NOASSERTION
An open source utility that provides fast incremental file transfer. It also has useful features for backup and restore operations among many other use cases.
by nv-legate python
103 Apache-2.0
An Aspiring Drop-In Replacement for NumPy at Scale
by gokrazy go
61 BSD-3-Clause
gokrazy rsync
by rsyncOSX swift
45 MIT
A SwiftUI based macOS GUI for rsync. Compiled for macOS Monterey.
by labmlai python
36 MIT
🕹 Run Python code on remote computers
by lab-ml python
35 MIT
🕹 Run Python code on remote computers
by laktak python
32 MIT
A status/progress bar for rsync
by lamhoangtung python
30 MIT
Create an SSH tunnel to a running Colab notebook
by gencer shell
22 MIT
Tarball, Rsync & S3 Cache Kit for Buildkite. Supports Linux, macOS and Windows
Top Authors in Incremental Backup
1
3 Libraries
1038
2
2 Libraries
125
3
2 Libraries
345
4
2 Libraries
247
5
2 Libraries
22
6
2 Libraries
142
7
2 Libraries
16
8
2 Libraries
82
9
2 Libraries
91
10
1 Libraries
3
Trending Kits in Incremental Backup
No Trending Kits are available at this moment for Incremental Backup
Trending Discussions on Incremental Backup
Incremental backups (mariabackup) and dropped databases
Backup of svn repository to shared folder on NAS fails
Hard link to a symbolic link with Win32 API?
Rsync Incremental Backup still copies all the files
btrfs send / receive on incremental folders that rotate
How to copy(backup) Azure CosmosDB container to Azure blob storage?
Why am I getting "Agent error" when trying to perform a backup?
Difference between incremental backup and WAL archiving with PgBackRest
Can we configure Marklogic database backup on S3 bucket
Alternatives to Select-String
QUESTION
Incremental backups (mariabackup) and dropped databases
Asked 2022-Mar-28 at 05:39
I tried to search for this case everywhere but couldn't find anything that answers this (probably weird) question: what happens to the incremental backups taken from a MariaDB server using mariabackup if one of the databases is dropped?
Suppose you drop one of the databases on a MariaDB server and then create an incremental backup, while the base full backup still includes the dropped database. Does applying the incremental backup when preparing to restore include that removal, or will the dropped database still be present in the fully prepared backup?
PS: I realize that mariabackup uses the InnoDB LSN to back up only the changes/diffs, but do those diffs include the removal of a table or a database?
My guess is that preparing the incremental backup over the base would remove the tables and/or databases that are missing from the latest delta backups, but I might be wrong, which is why I'm asking.
ANSWER
Answered 2022-Mar-28 at 05:39
Well, after trying out the scenario, I've found that the dropped databases do still exist in the fully prepared backup, but their tables are removed.
So it seems that database structure changes are included in the incremental backup: modifications to table columns, foreign keys, and indices, as well as table creation and dropping, are all tracked. Dropping the database itself, however, is NOT tracked; a dropped database will simply have all of its tables missing from the backup that results from applying all incremental backups to the base one.
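For reference, the backup-and-prepare sequence being discussed looks roughly like this. This is a command sketch with illustrative paths (/backups/full and /backups/inc1 are assumptions, not from the thread); mariabackup must run with credentials that can read the server's datadir:

```shell
# Take the full base backup
mariabackup --backup --target-dir=/backups/full

# ...drop a database on the server, then take the incremental delta
mariabackup --backup --target-dir=/backups/inc1 --incremental-basedir=/backups/full

# Prepare: first the base, then apply the incremental delta onto it
mariabackup --prepare --target-dir=/backups/full
mariabackup --prepare --target-dir=/backups/full --incremental-dir=/backups/inc1
# /backups/full is now the "fully prepared backup" the question asks about
```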
QUESTION
Backup of svn repository to shared folder on NAS fails
Asked 2022-Feb-07 at 14:08
I want to set up an automatic incremental backup of my SVN repositories. Writing to a local folder on the same PC seems to work, but if I try to write the files directly to a shared folder (I tried 2 different QNAP NAS boxes) I get various errors, always after a couple of hundred lines.
I tried
svnadmin hotcopy C:\SVNRepos\RalfStein \\Nas\home\svn\RalfStein
After 202 lines it returned
svnadmin: E720059: Can't move '\\oldnas\home\svn\RalfStein\db\revprops\svn-1AA28066' to '\\oldnas\home\svn\RalfStein\db\revprops\203': Unerwarteter Netzwerkfehler.
("Unerwarteter Netzwerkfehler" means "unexpected network error".)
On a different nas it returned after 799 lines:
svnadmin: E720059: Can't close file '\\nas\home\svn\RalfStein\db\revprops\svn-44790380': Unerwarteter Netzwerkfehler.
I've shut down the svn service, so there's no danger of someone working on svn in the meantime. Still same problem.
I don't want to use "dump" as it's not incremental. I can manually copy the entire repository, and that seems to work. However, I'd like to understand where the problem is coming from.
System:
- VisualSVN V4.3.6 on Windows 10
- svnadmin --version : 1.14.1
- The NAS (a QNAP TS453d) provides the SMB-share
ANSWER
Answered 2022-Feb-07 at 12:59
- What protocol does your NAS use?
- Do you see errors when you run the Backup-SvnRepository PowerShell cmdlet?
- What VisualSVN Server version and what version of the SVN command-line tools are you using? I.e., what does svnadmin --version say?
Note that you can consider the built-in Backup and Restore feature. It supports backup scheduling, encryption, incremental backups and backups to remote shares and Azure Files cloud. See KB106: Getting Started with Backup and Restore and KB137: Choosing backup destination.
I want to set up an automatic incremental backup of my SVN repositories. Doing that to a local folder of the same PC seems to work, but if I try to write the files directly to a shared folder (I tried 2 different QNAP nas boxes) I get various errors, always a couple of hundred lines.
From what I see, an unexpected network error indeed occurs when you hotcopy the repository onto your NAS. Please double-check that you are using up-to-date Subversion command-line tools (what does svnadmin --version say?).
I've shut down the svn service, so there's no danger of someone working on svn in the meantime. Still same problem.
You don't need to stop the server's services when you run svnadmin hotcopy:
[[[
This subcommand makes a “hot” backup of your repository, including all hooks, configuration files, and, of course, database files. You can run this command at any time and make a safe copy of the repository, regardless of whether other processes are using the repository.
]]]
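One workaround worth noting (an assumption on my part, not from the thread): hotcopy to a fast local folder first, then mirror the result to the SMB share in a second step. svnadmin hotcopy supports an --incremental flag since Subversion 1.8, so only new revisions are copied; the local staging path below is hypothetical:

```shell
# Hotcopy to a local folder first (--incremental copies only new revisions)
svnadmin hotcopy --incremental C:/SVNRepos/RalfStein D:/LocalBackup/RalfStein

# Then mirror the local copy onto the SMB share
robocopy D:/LocalBackup/RalfStein \\Nas\home\svn\RalfStein /MIR
```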
QUESTION
Hard link to a symbolic link with Win32 API?
Asked 2021-Nov-26 at 08:59
Quick background to this question, as I'm sure it'll raise a few eyebrows: I'm developing a command line tool in C for making backups, and I am implementing incremental backups using NTFS hard links. Thus, if symbolic links exist in a prior backup, I must be able to point to the symbolic links themselves, not their targets.
Unfortunately, the page for CreateHardLink clearly states:
Symbolic link behavior—If the path points to a symbolic link, the function creates a hard link to the target.
Now I'm stuck wondering: what's the solution to this? How can I create a hard link that points to a symbolic link itself, as opposed to its target? I did notice that Windows' internal command MKLINK appears to be able to create hard links to symlinks. So theoretically I guess I could just use the system function in C, but to be honest, that feels lazy and I tend to avoid it. Is there possibly a solution using only the Win32 API?
I also came across some code snippets from a Google developer ([1] [2]), with some details on the implementation of CreateHardLink and whatnot, but it seemed a little too low-level for me to make any real sense of it. Also (and I could be wrong about this), the functions provided in the GitHub repo seem to only be compatible with Windows 10 and later, but I'd hope to at least support Windows 7 as well.
ANSWER
Answered 2021-Nov-26 at 08:59
CreateHardLink creates a hard link to the symbolic link (reparse point) itself, not to its target, so the documentation is simply wrong: lpExistingFileName is opened with the FILE_OPEN_REPARSE_POINT option. You can therefore use CreateHardLink as is, and nothing more needs to be done. Vice versa, if you want to create a hard link to the target, you need a custom implementation of CreateHardLink that does not use FILE_OPEN_REPARSE_POINT (if you use NtOpenFile) or FILE_FLAG_OPEN_REPARSE_POINT (if you use CreateFileW).
I did notice Windows' internal command MKLINK appears to be able to create hardlinks to symlinks.
If you debug cmd.exe while running the mklink command, you can easily see that it also simply calls the CreateHardLinkW API (set a breakpoint on it).
After you create a hard link to a symlink file, you can see in Explorer that the file's type is .symlink. As a test, we can remove the link from the target file (by using FSCTL_DELETE_REPARSE_POINT). If the hard link pointed to the target, no operation on the symlink would affect the hard link; but since we created the hard link to the symlink itself, breaking the symlink also breaks the hard link:
void TestCreateHardLink(PCWSTR lpFileName, PCWSTR lpSymlinkFileName, PCWSTR lpExistingFileName)
{
    if (CreateSymbolicLinkW(lpSymlinkFileName, lpExistingFileName, 0))
    {
        if (CreateHardLinkW(lpFileName, lpSymlinkFileName, 0))
        {
            HANDLE hFile = CreateFileW(lpSymlinkFileName, FILE_WRITE_ATTRIBUTES, FILE_SHARE_DELETE, 0, OPEN_EXISTING,
                FILE_FLAG_OPEN_REPARSE_POINT, 0);

            if (hFile != INVALID_HANDLE_VALUE)
            {
                REPARSE_DATA_BUFFER rdb = { IO_REPARSE_TAG_SYMLINK };
                OVERLAPPED ov {};
                if (DeviceIoControl(hFile, FSCTL_DELETE_REPARSE_POINT, &rdb, sizeof(rdb), 0, 0, 0, &ov))
                {
                    MessageBoxW(0, 0, 0, 0);
                }
                CloseHandle(hFile);
            }
            DeleteFileW(lpFileName);
        }
        DeleteFileW(lpSymlinkFileName);
    }
}
If we want a more flexible implementation of hard link creation (choosing whether to link to the symlink itself or to its target), we can use the following code:
HRESULT CreateHardLinkExW(PCWSTR lpFileName, PCWSTR lpExistingFileName, BOOLEAN ReplaceIfExisting, BOOLEAN bToTarget)
{
    HANDLE hFile = CreateFileW(lpExistingFileName, 0, FILE_SHARE_READ|FILE_SHARE_WRITE, 0, OPEN_EXISTING,
        bToTarget ? FILE_FLAG_BACKUP_SEMANTICS : FILE_FLAG_BACKUP_SEMANTICS|FILE_FLAG_OPEN_REPARSE_POINT, 0);

    if (hFile == INVALID_HANDLE_VALUE)
    {
        return HRESULT_FROM_WIN32(GetLastError());
    }

    UNICODE_STRING NtName;
    NTSTATUS status = RtlDosPathNameToNtPathName_U_WithStatus(lpFileName, &NtName, 0, 0);

    if (0 <= status)
    {
        ULONG Length = FIELD_OFFSET(FILE_LINK_INFORMATION, FileName) + NtName.Length;

        if (PFILE_LINK_INFORMATION LinkInfo = (PFILE_LINK_INFORMATION)_malloca(Length))
        {
            LinkInfo->ReplaceIfExists = ReplaceIfExisting;
            LinkInfo->RootDirectory = 0;
            LinkInfo->FileNameLength = NtName.Length;
            memcpy(LinkInfo->FileName, NtName.Buffer, NtName.Length);
            IO_STATUS_BLOCK iosb;
            status = NtSetInformationFile(hFile, &iosb, LinkInfo, Length, FileLinkInformation);
        }
        else
        {
            status = STATUS_NO_MEMORY;
        }

        RtlFreeUnicodeString(&NtName);
    }

    CloseHandle(hFile);

    return 0 > status ? HRESULT_FROM_NT(status) : S_OK;
}
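The same distinction exists on POSIX systems, where link(2) does not follow symlinks. A quick shell sketch (assuming GNU coreutils on Linux; ln -P forces a physical link to the symlink itself) shows that breaking the symlink also breaks a hard link made to it:

```shell
cd "$(mktemp -d)"
echo data > target.txt
ln -s target.txt sym       # symlink -> target.txt
ln -P sym hardsym          # hard link to the symlink itself, not to target.txt
stat -c %F hardsym         # reports "symbolic link": hardsym shares sym's inode
rm target.txt              # break the symlink...
cat hardsym 2>/dev/null || echo broken   # ...and the hard link dangles too
```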
QUESTION
Rsync Incremental Backup still copies all the files
Asked 2021-Nov-21 at 13:50
I am currently writing a bash script for rsync. I am pretty sure I am doing something wrong, but I can't tell what it is. I will try to explain everything in detail, so hopefully someone can help me.
The goal of the script is to do full backups and incremental ones using rsync. Everything seems to work perfectly well, besides one crucial thing: even though I am using the --link-dest parameter, it still seems to copy all the files. I have checked the file sizes with du -chs.
First here is my script:
#!/bin/sh
while getopts m:p: flags
do
    case "$flags" in
        m) mode=${OPTARG};;
        p) prev=${OPTARG};;
        *) echo "usage: $0 [-m] [-p]" >&2
           exit 1 ;;
    esac
done

date="$(date '+%Y-%m-%d')";


#Create Folders If They Do Not Exist (-p parameter)
mkdir -p /Backups/Full && mkdir -p /Backups/Inc

FullBackup() {
    #Backup Content Of Website
    mkdir -p /Backups/Full/$date/Web/html
    rsync -av user@IP:/var/www/html/ /Backups/Full/$date/Web/html/

    #Backup All Config Files NEEDED. Saving Storage Is Key ;)
    mkdir -p /Backups/Full/$date/Web/etc
    rsync -av user@IP:/etc/apache2/ /Backups/Full/$date/Web/etc/

    #Backup Fileserver
    mkdir -p /Backups/Full/$date/Fileserver
    rsync -av user@IP:/srv/samba/private/ /Backups/Full/$date/Fileserver/

    #Backup MongoDB
    ssh user@IP /usr/bin/mongodump --out /home/DB
    rsync -av root@BackupServerIP:/home/DB/ /Backups/Full/$date/DB
    ssh user@IP rm -rf /home/DB
}

IncrementalBackup(){
    Method="";
    if [ "$prev" == "full" ]
    then
        Method="Full";
    elif [ "$prev" == "inc" ]
    then
        Method="Inc";
    fi

    if [ -z "$prev" ]
    then
        echo "-p Parameter Empty";
    else
        #Get Latest Folder - Ignore the hacky method, it works.
        cd /Backups/$Method
        NewestBackup=$(find . ! -path . -type d | sort -nr | head -1 | sed s@^./@@)
        IFS='/'
        read -a strarr <<< "$NewestBackup"
        Latest_Backup="${strarr[0]}";
        cd /Backups/

        #Incremental-Backup Content Of Website
        mkdir -p /Backups/Inc/$date/Web/html
        rsync -av --link-dest /Backups/$Method/"$Latest_Backup"/Web/html/ user@IP:/var/www/html/ /Backups/Inc/$date/Web/html/

        #Incremental-Backup All Config Files NEEDED
        mkdir -p /Backups/Inc/$date/Web/etc
        rsync -av --link-dest /Backups/$Method/"$Latest_Backup"/Web/etc/ user@IP:/etc/apache2/ /Backups/Inc/$date/Web/etc/

        #Incremental-Backup Fileserver
        mkdir -p /Backups/Inc/$date/Fileserver
        rsync -av --link-dest /Backups/$Method/"$Latest_Backup"/Fileserver/ user@IP:/srv/samba/private/ /Backups/Inc/$date/Fileserver/

        #Backup MongoDB
        ssh user@IP /usr/bin/mongodump --out /home/DB
        rsync -av root@BackupServerIP:/home/DB/ /Backups/Full/$date/DB
        ssh user@IP rm -rf /home/DB
    fi
}

if [ "$mode" == "full" ]
then
    FullBackup;
elif [ "$mode" == "inc" ]
then
    IncrementalBackup;
fi
The commands I used:
Full backup:
bash script.sh -m full
Incremental backup:
bash script.sh -m inc -p full
Executing the script is not giving any errors at all. As I mentioned above, it just seems like it's still copying all the files. Here are some tests I did.
Output of du -chs
root@Backup:/Backups# du -chs /Backups/Full/2021-11-20/*
36K     /Backups/Full/2021-11-20/DB
6.5M    /Backups/Full/2021-11-20/Fileserver
696K    /Backups/Full/2021-11-20/Web
7.2M    total
root@Backup:/Backups# du -chs /Backups/Inc/2021-11-20/*
36K     /Backups/Inc/2021-11-20/DB
6.5M    /Backups/Inc/2021-11-20/Fileserver
696K    /Backups/Inc/2021-11-20/Web
7.2M    total
Output of ls -li
root@Backup:/Backups# ls -li /Backups/Full/2021-11-20/
total 12
1290476 drwxr-xr-x 4 root root 4096 Nov 20 19:26 DB
1290445 drwxrwxr-x 6 root root 4096 Nov 20 18:54 Fileserver
1290246 drwxr-xr-x 4 root root 4096 Nov 20 19:26 Web
root@Backup:/Backups# ls -li /Backups/Inc/2021-11-20/
total 12
1290506 drwxr-xr-x 4 root root 4096 Nov 20 19:28 DB
1290496 drwxrwxr-x 6 root root 4096 Nov 20 18:54 Fileserver
1290486 drwxr-xr-x 4 root root 4096 Nov 20 19:28 Web
Rsync Output when doing the incremental backup and changing/adding a file
receiving incremental file list
./
lol.html

sent 53 bytes  received 194 bytes  164.67 bytes/sec
total size is 606  speedup is 2.45
receiving incremental file list
./

sent 33 bytes  received 5,468 bytes  11,002.00 bytes/sec
total size is 93,851  speedup is 17.06
receiving incremental file list
./

sent 36 bytes  received 1,105 bytes  760.67 bytes/sec
total size is 6,688,227  speedup is 5,861.72
*Irrelevant MongoDB Dump Text*

sent 146 bytes  received 2,671 bytes  1,878.00 bytes/sec
total size is 2,163  speedup is 0.77
I suspect that the ./ has something to do with it. I might be wrong, but it looks suspicious. When executing the same command again, the ./ entries are not in the log, probably because I ran it on the same day, so it was overwriting into the /Backups/Inc/2021-11-20 folder.
Let me know if you need more information. I have been trying for a long time now. Maybe I am simply wrong, and the links are in fact made and disk space economized.
ANSWER
Answered 2021-Nov-21 at 13:50
I didn't read the entire code because the main problem didn't seem to lie there.
Verify the disk usage of your /Backups directory with du -sh /Backups and then compare it with the sum of du -sh /Backups/Full and du -sh /Backups/Inc.
I'll show you why with a little test:
Create a directory containing a file of 1 MiB:
mkdir -p /tmp/example/data

dd if=/dev/zero of=/tmp/example/data/zerofile bs=1M count=1
Do a "full" backup:
rsync -av /tmp/example/data/ /tmp/example/full
Do an "incremental" backup:
rsync -av --link-dest=/tmp/example/full /tmp/example/data/ /tmp/example/incr
Now let's see what we got. With ls -l:
ls -l /tmp/example/*
-rw-rw-r-- 1 user group 1048576 Nov 21 00:24 /tmp/example/data/zerofile
-rw-rw-r-- 2 user group 1048576 Nov 21 00:24 /tmp/example/full/zerofile
-rw-rw-r-- 2 user group 1048576 Nov 21 00:24 /tmp/example/incr/zerofile
and with du -sh:
du -sh /tmp/example/*
1.0M /tmp/example/data
1.0M /tmp/example/full
0    /tmp/example/incr
- Oh? There was a 1 MiB file in /tmp/example/incr but du missed it?

Actually no. Since the file hadn't been modified since the previous backup (the one referenced with --link-dest), rsync created a hard link to it instead of copying its content. Hard links are multiple directory entries that point to the same data on disk, so the data is stored only once.

du detects hard links and reports the real disk usage, counting shared data only once, but only when all the hard-linked names fall within its arguments (including sub-directories). For example, if you run du -sh on /tmp/example/incr alone:
du -sh /tmp/example/incr
1.0M /tmp/example/incr
- How do you detect that there are hard links to a file?

ls -l actually showed it to us:
-rw-rw-r-- 2 user group 1048576 Nov 21 00:24 /tmp/example/full/zerofile
           ^
           HERE
This link count means that two directory entries point to the file's data: this one, and one other somewhere in the same filesystem.
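The link count and du's behaviour are easy to reproduce with plain coreutils, without rsync at all. A minimal sketch (using a hypothetical scratch directory, not the backup tree above):

```shell
#!/bin/sh
set -e

dir=$(mktemp -d)                    # hypothetical scratch dir for the demo
dd if=/dev/zero of="$dir/orig" bs=1M count=1 status=none
mkdir "$dir/sub"
ln "$dir/orig" "$dir/sub/alias"     # second name for the same inode

# Both names report link count 2 and the same inode number.
stat -c '%h %i %n' "$dir/orig" "$dir/sub/alias"

# du counts the shared megabyte once when both names are under its arguments...
du -sh "$dir"
# ...but it cannot know about the other name when given only the sub-directory.
du -sh "$dir/sub"
```

This is exactly why /tmp/example/incr showed 0 in one du run and 1.0M in the other: the accounting depends on which hard-linked names du has already seen.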
About your code:

It doesn't change the result, but I would replace:
#Get Latest Folder - Ignore the hacky method, it works.
cd /Backups/$Method
NewestBackup=$(find . ! -path . -type d | sort -nr | head -1 | sed s@^./@@)
IFS='/'
read -a strarr <<< "$NewestBackup"
Latest_Backup="${strarr[0]}";
cd /Backups/
with:
1#!/bin/sh
2while getopts m:p: flags
3do
4 case "$flags" in
5 m) mode=${OPTARG};;
6 p) prev=${OPTARG};;
7 *) echo "usage: $0 [-m] [-p]" >&2
8 exit 1 ;;
9 esac
10done
11
12date="$(date '+%Y-%m-%d')";
13
14
15#Create Folders If They Do Not Exist (-p paramter)
16mkdir -p /Backups/Full && mkdir -p /Backups/Inc
17
18FullBackup() {
	#Backup Content Of Website
	mkdir -p /Backups/Full/$date/Web/html
	rsync -av user@IP:/var/www/html/ /Backups/Full/$date/Web/html/

	#Backup All Config Files NEEDED. Saving Storage Is Key ;)
	mkdir -p /Backups/Full/$date/Web/etc
	rsync -av user@IP:/etc/apache2/ /Backups/Full/$date/Web/etc/

	#Backup Fileserver
	mkdir -p /Backups/Full/$date/Fileserver
	rsync -av user@IP:/srv/samba/private/ /Backups/Full/$date/Fileserver/

	#Backup MongoDB
	ssh user@IP /usr/bin/mongodump --out /home/DB
	rsync -av root@BackupServerIP:/home/DB/ /Backups/Full/$date/DB
	ssh user@IP rm -rf /home/DB
}

IncrementalBackup(){
	Method="";
	if [ "$prev" == "full" ]
	then
		Method="Full";
	elif [ "$prev" == "inc" ]
	then
		Method="Inc";
	fi

	if [ -z "$prev" ]
	then
		echo "-p Parameter Empty";
	else
		#Get Latest Folder - Ignore the hacky method, it works.
		cd /Backups/$Method
		NewestBackup=$(find . ! -path . -type d | sort -nr | head -1 | sed s@^./@@)
		IFS='/'
		read -a strarr <<< "$NewestBackup"
		Latest_Backup="${strarr[0]}";
		cd /Backups/

		#Incremental-Backup Content Of Website
		mkdir -p /Backups/Inc/$date/Web/html
		rsync -av --link-dest /Backups/$Method/"$Latest_Backup"/Web/html/ user@IP:/var/www/html/ /Backups/Inc/$date/Web/html/

		#Incremental-Backup All Config Files NEEDED
		mkdir -p /Backups/Inc/$date/Web/etc
		rsync -av --link-dest /Backups/$Method/"$Latest_Backup"/Web/etc/ user@IP:/etc/apache2/ /Backups/Inc/$date/Web/etc/

		#Incremental-Backup Fileserver
		mkdir -p /Backups/Inc/$date/Fileserver
		rsync -av --link-dest /Backups/$Method/"$Latest_Backup"/Fileserver/ user@IP:/srv/samba/private/ /Backups/Inc/$date/Fileserver/

		#Backup MongoDB
		ssh user@IP /usr/bin/mongodump --out /home/DB
		rsync -av root@BackupServerIP:/home/DB/ /Backups/Inc/$date/DB
		ssh user@IP rm -rf /home/DB
	fi
}

if [ "$mode" == "full" ]
then
	FullBackup;
elif [ "$mode" == "inc" ]
then
	IncrementalBackup;
fi
root@Backup:/Backups# du -chs /Backups/Full/2021-11-20/*
36K	/Backups/Full/2021-11-20/DB
6.5M	/Backups/Full/2021-11-20/Fileserver
696K	/Backups/Full/2021-11-20/Web
7.2M	total
root@Backup:/Backups# du -chs /Backups/Inc/2021-11-20/*
36K	/Backups/Inc/2021-11-20/DB
6.5M	/Backups/Inc/2021-11-20/Fileserver
696K	/Backups/Inc/2021-11-20/Web
7.2M	total

root@Backup:/Backups# ls -li /Backups/Full/2021-11-20/
total 12
1290476 drwxr-xr-x 4 root root 4096 Nov 20 19:26 DB
1290445 drwxrwxr-x 6 root root 4096 Nov 20 18:54 Fileserver
1290246 drwxr-xr-x 4 root root 4096 Nov 20 19:26 Web
root@Backup:/Backups# ls -li /Backups/Inc/2021-11-20/
total 12
1290506 drwxr-xr-x 4 root root 4096 Nov 20 19:28 DB
1290496 drwxrwxr-x 6 root root 4096 Nov 20 18:54 Fileserver
1290486 drwxr-xr-x 4 root root 4096 Nov 20 19:28 Web
receiving incremental file list
./
lol.html

sent 53 bytes  received 194 bytes  164.67 bytes/sec
total size is 606  speedup is 2.45
receiving incremental file list
./

sent 33 bytes  received 5,468 bytes  11,002.00 bytes/sec
total size is 93,851  speedup is 17.06
receiving incremental file list
./

sent 36 bytes  received 1,105 bytes  760.67 bytes/sec
total size is 6,688,227  speedup is 5,861.72
*Irrelevant MongoDB Dump Text*

sent 146 bytes  received 2,671 bytes  1,878.00 bytes/sec
total size is 2,163  speedup is 0.77
mkdir -p /tmp/example/data

dd if=/dev/zero of=/tmp/example/data/zerofile bs=1M count=1
rsync -av /tmp/example/data/ /tmp/example/full
rsync -av --link-dest=/tmp/example/full /tmp/example/data/ /tmp/example/incr
ls -l /tmp/example/*
-rw-rw-r-- 1 user group 1048576 Nov 21 00:24 /tmp/example/data/zerofile
-rw-rw-r-- 2 user group 1048576 Nov 21 00:24 /tmp/example/full/zerofile
-rw-rw-r-- 2 user group 1048576 Nov 21 00:24 /tmp/example/incr/zerofile
du -sh /tmp/example/*
1.0M	/tmp/example/data
1.0M	/tmp/example/full
0	/tmp/example/incr
du -sh /tmp/example/incr
1.0M	/tmp/example/incr
-rw-rw-r-- 2 user group 1048576 Nov 21 00:24 /tmp/example/full/zerofile
           ^
           HERE
#Get Latest Folder - Ignore the hacky method, it works.
cd /Backups/$Method
NewestBackup=$(find . ! -path . -type d | sort -nr | head -1 | sed s@^./@@)
IFS='/'
read -a strarr <<< "$NewestBackup"
Latest_Backup="${strarr[0]}";
cd /Backups/

#Get Latest Folder
glob='20[0-9][0-9]-[0-1][0-9]-[0-3][0-9]'   # match a timestamp (more or less)
NewestBackup=$(compgen -G "/Backups/$Method/$glob/" | sort -nr | head -n 1)
- The glob makes sure that the directories/files found by compgen -G will have the right format.
- Adding / at the end of a glob makes sure that it matches directories only.
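To see the improved approach in action, here is a small self-contained sketch. It uses a throwaway temp directory instead of /Backups, and a hypothetical stray "notes" folder to show that names not matching the date glob are ignored:

```shell
# Demo of compgen -G with a date-shaped glob; all paths are throwaway.
base=$(mktemp -d)
mkdir -p "$base/Full/2021-11-19" "$base/Full/2021-11-20" "$base/Full/notes"

glob='20[0-9][0-9]-[0-1][0-9]-[0-3][0-9]'   # match a timestamp (more or less)
# compgen -G expands the glob; the trailing / restricts it to directories.
NewestBackup=$(compgen -G "$base/Full/$glob/" | sort -nr | head -n 1)

echo "$NewestBackup"   # the 2021-11-20 directory; "notes" never matches
```

Note that compgen is a bash builtin, so this requires bash rather than a plain POSIX sh.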
QUESTION
btrfs send / receive on incremental folders that rotate
Asked 2021-Sep-28 at 06:36
I am taking incremental backups using rsnapshot combined with a custom cmd_cp and cmd_rm to make use of btrfs snapshots. This produces multiple daily btrfs subvolumes:
.sync
daily.0
daily.1
daily.2
.sync is the folder that gets synced to over SSH from the server I back up. When that completes, this happens:
.sync
daily.0
daily.1
daily.2
mv daily.2/ daily.3/
mv daily.1/ daily.2/
mv daily.0/ daily.1/
rsnapshot_cp_btrfs -al .sync daily.0
The cp command translates into btrfs subvolume snapshot -r .sync daily.0.
This all works great. But now I want to sync all backups to another server too, so I have a full mirror of all backups. This sync should always work, even if it happens a week later (perhaps due to SSH connection issues).
Normally syncing would be easy using btrfs send and receive using parent snapshots as described on the wiki: https://btrfs.wiki.kernel.org/index.php/Incremental_Backup#Doing_it_by_hand.2C_step_by_step
I imagined a loop that just sends all daily folders, keeping the old backup around for parent reference. But in this case daily.0 moves to daily.1, 1 to 2, and so on, so that would not work.
I could replay a simple mv on the remote server, but I can't trust this: after any error, the folder structure would no longer be correct a day later. I want a true mirror, while making use of the btrfs tools.
Has anyone managed a similar situation, or does anyone know the best way to clone all subvolumes to the other server?
Big thanks!
ANSWER
Answered 2021-Sep-28 at 06:36
I solved it! I created a bash script that syncs all snapshots to the remote server under a date-based name. The date is taken from btrfs subvolume show, so daily.0 can become 2021-09-20-08-44-46 on the remote.
I sync backwards: daily.30 first, daily.0 last. This way I can pass the proper parent to btrfs send, e.g. btrfs send -p daily.30 daily.29.
If the date-named snapshot already exists on the remote, I check with btrfs subvolume show whether it was properly synced. If not, I delete the remote subvolume/snapshot and re-sync; if it was already synced, I skip it. A properly synced subvolume/snapshot has a Received UUID and the readonly flag.
After syncing I compare all snapshot names on the remote to what was just synced. The difference (i.e. the old snapshots) gets deleted.
I might share the code in the future when it's all been stable for a long run. For now I hope the above information will help others!
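The backwards-sync loop described above could be sketched roughly like this. It is a dry run that only prints the commands it would run; the snapshot count, the remote host name, and the receive path are all assumptions, and the real script would additionally perform the Received UUID checks:

```shell
# Dry-run sketch: build the btrfs send/receive commands, oldest snapshot
# first, passing the previously sent snapshot as the parent (-p).
# "remote" and /mnt/mirror are hypothetical placeholders.
oldest=3                       # stand-in for daily.30 in the answer
cmds=""
for (( i=oldest; i>=0; i-- )); do
  if (( i == oldest )); then
    # the oldest snapshot has no parent on the remote yet: full send
    cmds+="btrfs send daily.$i | ssh remote 'btrfs receive /mnt/mirror'"$'\n'
  else
    # every newer snapshot is sent incrementally against the one before it
    cmds+="btrfs send -p daily.$((i+1)) daily.$i | ssh remote 'btrfs receive /mnt/mirror'"$'\n'
  fi
done
printf '%s' "$cmds"
```

Going oldest-to-newest is what guarantees that each snapshot's parent already exists on the receiving side.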
QUESTION
How to copy(backup) Azure CosmosDB container to Azure blob storage?
Asked 2021-Aug-24 at 15:47
There are many containers in my CosmosDB database, and I need to back up some (but not all) of them every day. Some containers are kept for 7 days, some for 15 days.
- I don't want to use incremental backup, because we back up just once a day.
- Maybe we store the backup dataset in Azure Blob Storage.
One thing I don't know: does container == collection? The Azure documentation is so confusing!
ANSWER
Answered 2021-Aug-23 at 11:54
You can probably create a job in Azure Data Factory (aka ADF, https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db), use the ADF job to copy data from these containers, and save the data as files (one file per container) somewhere like Azure Blob Storage.
QUESTION
Why am I getting "Agent error" when trying to perform a backup?
Asked 2021-Jul-31 at 22:03
I am implementing a backup system for my Android app. I'm using a custom BackupAgentHelper to back up the shared preferences and a database file:
class CustomAgent : BackupAgentHelper() {

    val DB_NAME = "notesDB"

    val DB_BACKUP_KEY = "dbBackup"
    val SHARED_PREFS_KEY = "prefsBackup"

    override fun onCreate() {
        super.onCreate()
        //Backup note's database file
        val dbHelper = FileBackupHelper(this, DB_NAME)
        addHelper(DB_BACKUP_KEY, dbHelper)

        //Backup SharedPreferences
        val prefHelper = SharedPreferencesBackupHelper(this)
        addHelper(SHARED_PREFS_KEY, prefHelper)
    }

    override fun onBackup(
        oldState: ParcelFileDescriptor?,
        data: BackupDataOutput?,
        newState: ParcelFileDescriptor?
    ) {
        super.onBackup(oldState, data, newState)
        Toast.makeText(this, "Performing backup...", Toast.LENGTH_SHORT).show()
    }

    override fun onRestore(
        data: BackupDataInput?,
        appVersionCode: Int,
        newState: ParcelFileDescriptor?
    ) {
        Toast.makeText(this, "Restoring...", Toast.LENGTH_SHORT).show()
        super.onRestore(data, appVersionCode, newState)
    }

    override fun onQuotaExceeded(backupDataBytes: Long, quotaBytes: Long) {
        Toast.makeText(this, "Failed backup: The size is too big.", Toast.LENGTH_SHORT).show()
        super.onQuotaExceeded(backupDataBytes, quotaBytes)
    }

    override fun getFilesDir(): File {
        val path = getDatabasePath(DB_NAME)
        return path.parentFile
    }
}
I wanted to show a Toast message to let the user know that a backup is running, and also to inform them if there are any issues. However, if I run the app with the Toast messages and execute the following adb command to request a backup...
adb -s emulator-5554 shell bmgr backupnow com.byteseb.grafobook
I am getting this output:
Running incremental backup for 1 requested packages.
Package @pm@ with result: Success
Package com.byteseb.grafobook with result: Agent error
Backup finished with result: Success
And the Toast message is not shown.
But if I remove the Toast message lines or replace them with a println() function and execute the same command, I am getting this output:
Running incremental backup for 1 requested packages.
Package @pm@ with result: Success
Package com.byteseb.grafobook with result: Success
Backup finished with result: Success
Why is this error happening? And if I am not allowed to show a Toast, what else can I do to inform the user about backups?
ANSWER
Answered 2021-Jul-31 at 22:03
As @MikeM said, this was happening because I was executing the code from a non-UI thread.
I solved it by using a Handler, which takes care of running the code on the UI thread:
override fun onBackup(
    oldState: ParcelFileDescriptor?,
    data: BackupDataOutput?,
    newState: ParcelFileDescriptor?
) {
    super.onBackup(oldState, data, newState)
    val handler = Handler(Looper.getMainLooper())
    val runnable = Runnable {
        Toast.makeText(this, "Performing backup...", Toast.LENGTH_SHORT).show()
    }
    handler.post(runnable)
}
QUESTION
Difference between incremental backup and WAL archiving with PgBackRest
Asked 2021-Jul-09 at 10:34
As far as I understood:
- WAL archiving is pushing the WAL logs to a storage place as the WAL files are generated
- Incremental backup is pushing all the WAL files created since the last backup
So, assuming my WAL archiving is setup correctly
- Why would I need incremental backups?
- Shouldn't the cost of incremental backups be almost zero?
Most of the documentation I found focuses on high-level setup (e.g. how to configure WAL archiving or incremental backups) rather than the internals (what happens when I trigger an incremental backup).
My question could probably be answered with a link to some documentation, but my google-fu has failed me so far.
ANSWER
Answered 2021-Jul-09 at 10:34
Backups are not copies of the WAL files; they're copies of the cluster's whole data directory. As it says in the docs, an incremental backup contains:
those database cluster files that have changed since the last backup (which can be another incremental backup, a differential backup, or a full backup)
WALs alone aren't enough to restore a database; they only record changes to the cluster files, so they require a backup as a starting point.
The need for periodic backups (incremental or otherwise) is primarily to do with recovery time. Technically, you could just hold on to your original full backup plus years worth of WAL files, but replaying them all in the event of a failure could take hours or days, and you likely can't tolerate that kind of downtime.
A new backup also means that you can safely discard any older WALs (assuming you don't still need them for point-in-time recovery), meaning less data to store, and less data whose integrity you're relying on in order to recover.
If you want to know more about what pgBackRest is actually doing under the hood, it's all covered pretty thoroughly in the Postgres docs.
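For concreteness, pgBackRest ties backups and WAL retention together through settings in pgbackrest.conf. The fragment below is only an illustrative sketch; the stanza name "main", the paths, and the retention value are made up and should be checked against your own setup:

```ini
# /etc/pgbackrest/pgbackrest.conf (sketch; stanza "main" and paths are examples)
[global]
repo1-path=/var/lib/pgbackrest
# keep 2 full backups; diff/incr backups depending on them expire with them,
# and WAL older than the oldest retained full backup can be discarded
repo1-retention-full=2

[main]
pg1-path=/var/lib/postgresql/14/main
```

A full backup would then be taken with pgbackrest --stanza=main --type=full backup and an incremental one with --type=incr; expired backups and the WAL they required are pruned automatically.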
QUESTION
Can we configure Marklogic database backup on S3 bucket
Asked 2021-Apr-01 at 12:33
I need to configure MarkLogic full/incremental backups in an S3 bucket. Is that possible? Can anyone share the documents/steps to configure this?
Thanks!
ANSWER
Answered 2021-Apr-01 at 12:33
Yes, you can back up to S3.
You will need to configure the S3 credentials, so that MarkLogic is able to use S3 and read/write objects to your S3 bucket.
MarkLogic can't use S3 for journal archive paths, because S3 does not support file append operations. So if you want to enable journal archives, you will need to specify a custom path for that when creating your backups.
Backing Up a Database / S3 Storage:
The directory you specified can be an operating system mounted directory path, it can be an HDFS path, or it can be an S3 path. For details on using HDFS and S3 storage in MarkLogic, see Disk Storage Considerations in the Query Performance and Tuning Guide.
S3 requires authentication with the following S3 credentials:
- AWS Access Key
- AWS Secret Key
The S3 credentials for a MarkLogic cluster are stored in the cluster's security database. You can only have one set of S3 credentials per cluster. Once you set up security access in S3, you can access any paths that those credentials allow. Because of the flexibility of how access can be configured in S3, any S3 account can grant access to another, so if you want the credentials set up in MarkLogic to access S3 paths owned by other S3 users, those users need to grant access to those paths to the AWS Access Key set up in your MarkLogic cluster.
To set up the AWS credentials for a cluster, enter the keys in the Admin Interface under Security > Credentials. You can also set up the keys programmatically using the following Security API functions:
- sec:credentials-get-aws
- sec:credentials-set-aws
The credentials are stored in the Security database. Therefore, you cannot use S3 as the forest storage for a security database.
If you want journaling enabled, you will need the journal archives written to a different location, since journal archiving is not supported on S3.
The default location for journals is in the backup, but when creating a backup programmatically you can specify a different $journal-archive-path.
Storage on S3 has an 'eventual consistency' property, meaning that write operations might not be available immediately for reading, but they will be available at some point. Because of this, S3 data directories in MarkLogic have a restriction that MarkLogic does not create Journals on S3. Therefore, MarkLogic recommends that you use S3 only for backups and for read-only forests, otherwise you risk the possibility of data loss. If your forests are read-only, then there is no need to have journals.
QUESTION
Alternatives to Select-String
Asked 2021-Mar-21 at 14:49I'm looking for an alternative to Select-String
to use within my function. At present, everything else in the function returns the data I need except this command:
Get-Content "$((Get-Location).Drive.Name):\Test\file.log" | Select-Object -Last 100 | Select-String -Pattern "SCHEDULEREC STATUS BEGIN" -Context 0,24
On its own the command works fine, albeit slowly. This is due in part to the fact that the file it reads with Get-Content is sometimes 4MB to 700MB in size. However, I also found that even if I cut the unnecessary information out of the file and reduce it down to only (for example) 5KB, when used in the function Select-String will only return the "SCHEDULEREC STATUS BEGIN" headline and not the following 24 context lines, despite working fine separately (the rest of the commands in the function still work fine regardless).
The environment consists of Windows Server 2003 to 2019, using PowerShell v1+; as such, any commands used need to work across all versions of PowerShell. The function's purpose is to gather TSM (Tivoli Storage Manager) information, so without that environment it may be difficult to reproduce the issue I'm encountering.
Full function:
function Start-WindowsTSMBackup {
    <#
    .Synopsis
    Standard backup process for TSM based clients on Windows

    .Description
    This script helps in going through the required tasks of initial troubleshooting remotely on a remote computer.

    .Parameter ComputerName
    Server name for which you want to run the TSM process on; you will be prompted to input

    .Parameter Output
    Change the location of "$Output" to where you want the success/failure texts to go

    .Parameter Credential
    Allows usage over domain/local networks that require password authentication

    .Notes
    NAME: Start-WindowsTSMBackup.ps1
    AUTHOR: mm079

    #>

    [cmdletbinding()]
    param(
        [ValidateScript({Test-Connection -ComputerName $_ -Quiet -Count 1})]
        [parameter(Mandatory=$true)] # Will request the input of the computername
        [string]$ComputerName,

        # Output directory for the results
        [string]$Output = "C:\Temp\$ComputerName Results.txt",

        <#
        # Use this to submit credentials automatically
        [string]$Username = "domain\username",
        [string]$Password = "Pa$$w0rd",
        [securestring]$SecurePassword = ( $Password | ConvertTo-SecureString -AsPlainText -Force),
        [pscredential]$Credential = (New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $Username,$SecurePassword),
        #>

        # Use this to submit credentials by requested input
        [System.Management.Automation.PSCredential]$Credential = (Get-Credential)
    ) # Param

    BEGIN {

    } # BEGIN

    PROCESS {

        foreach ($Computer in $ComputerName) {

            if (Test-Connection -ComputerName $Computer -Count 1 -ea 0) {
                try {
                    $Results = invoke-command -script {

                        #Obtain verbose information for testing
                        VerbosePreference = "continue"

                        #Query the server for basic information - Hostname and Current time
                        Write-Output $env:COMPUTERNAME
                        Write-Output ('Current Data/Time: ' + (Get-Date) )

                        # Retrieve the last boot time
                        SystemInfo | find /i "Boot Time"

                        # TSM Service Last Restart Time
                        Write-Output ('TSM Service Last Restart Time: ' + (Get-EventLog -LogName "System" -Source "Service Control Manager" -EntryType "Information" -Message "*TSM Scheduler Service*running*" -Newest 1).TimeGenerated )
                        Write-Verbose "Basic Information processed"


                        if (Test-Path "$((Get-Location).Drive.Name):\Program Files\Tivoli\TSM\baclient") {

                            # Obtains DSMSched information
                            Get-Content "$((Get-Location).Drive.Name):\Program Files\Tivoli\TSM\baclient\dsmsched.log" | Select-Object -Last 100 | Select-String -Pattern "SCHEDULEREC STATUS BEGIN" -Context 0,24
                            Write-Verbose "DSMSched Information processed"

                            # Obtains DSMErrors
                            Get-Content "$((Get-Location).Drive.Name):\Program Files\Tivoli\TSM\baclient\dsmerror.log" | Select-Object -Last 200 | Select-String -Pattern "Error", "failure"
                            Write-Verbose "DSMError Information processed"

                        } # if
                        else {

                            Write-Output "File path to TSM logs not found"
                            Write-Verbose "Unable to process TSM information due to invalid path - not found"
                        } # else

                        ## Restart the "TSM Scheduler Service"
                        if (Get-Service -Name "TSM*" -ErrorAction SilentlyContinue)
                        {
                            Stop-Service -Name "TSM*" -PassThru
                            Start-Service -Name "TSM*" -PassThru
                            Write-Verbose "TSM Service processed"
                        }
                        else
                        {
                            Write-Output "Could not restart TSM Service"
                            Write-Verbose "TSM Service restart failed"
                        }

                    } # invoke-command
                    -ComputerName $Computer -Credential $Credential -ErrorAction Stop
                } # try
                catch {

                    Write-Error "$($_.Exception.Message) - Line Number: $($_.InvocationInfo.ScriptLineNumber)"

                } # catch
            } # if

        } # foreach

    } # PROCESS
    END {

    } # END

} # Function Start-WindowsTSMBackup
Expected output
EXAMPLESERVERNAME
Current Data/Time: 02/02/2021 14:27:00
System Boot Time: 01/15/2021 12:00:00 AM
TSM Service Last Restart Time: 02/01/2021 16:27:21

02/01/2021 21:00:00 --- SCHEDULEREC STATUS BEGIN
02/01/2021 21:00:00 Total number of objects inspected: ####
02/01/2021 21:00:00 Total number of objects backed up: ####
02/01/2021 21:00:00 Total number of objects updated: ####
02/01/2021 21:00:00 Total number of objects rebound: ####
02/01/2021 21:00:00 Total number of objects deleted: ####
02/01/2021 21:00:00 Total number of objects expired: ####
02/01/2021 21:00:00 Total number of objects failed: ####
02/01/2021 21:00:00 Total number of objects encrypted: ####
02/01/2021 21:00:00 Total number of subfile objects: ####
02/01/2021 21:00:00 Total number of objects grew: ####
02/01/2021 21:00:00 Total number of retries: ####
02/01/2021 21:00:00 Total number of bytes inspected: ####
02/01/2021 21:00:00 Total number of bytes transferred: ####
02/01/2021 21:00:00 Data transfer time: ####
02/01/2021 21:00:00 Network data transfer rate: ####
02/01/2021 21:00:00 Aggregate data transfer rate: ####
02/01/2021 21:00:00 Objects compressed by: ####
02/01/2021 21:00:00 Total data reduction ratio: ####
02/01/2021 21:00:00 Subfile objects reduced by: ####
02/01/2021 21:00:00 Elapsed processing time: ####
02/01/2021 21:00:00 --- SCHEDULEREC STATUS END
02/01/2021 21:00:00 --- SCHEDULEREC OBJECT END BACKUP 02/01/2021 21:00:00
02/01/2021 21:00:00 Scheduled event 'BACKUP' completed successfully.
02/01/2021 21:00:00 Sending results for scheduled event 'BACKUP'

02/01/2021 21:00:00 AAABBBCCC Error processing '\\SERVER\DRIVELETTER\PATH\PROCESS' the object is in use by another process
02/01/2021 21:00:00 AAABBBCCC Error processing '\\SERVER\DRIVELETTER\PATH\File.txt' the object is in use by another process
02/01/2021 21:00:00 AAABBBCCC Incremental backup of '\\SERVER\DRIVEPATH\ finished with 2 error(s)

Status: Stopped
Name: TSM Scheduler Service
DisplayName: TSM Scheduler Service
PSComputerName: SERVER

Status: Stopped
Name: TSM Scheduler Service for SERVER
DisplayName: TSM Scheduler Service
PSComputerName: SERVER

Status: Running
Name: TSM Scheduler Service
DisplayName: TSM Scheduler Service
PSComputerName: SERVER

Status: Running
Name: TSM Scheduler Service for SERVER
DisplayName: TSM Scheduler Service
PSComputerName: SERVER
Current output #1
178EXAMPLESERVERNAME
179Current Data/Time: 02/02/2021 14:27:00
180System Boot Time: 01/15/2021 12:00:00 AM
181TSM Service Last Restart Time: 02/01/2021 16:27:21
182
18302/01/2021 21:00:00 --- SCHEDULEREC STATUS BEGIN
18402/01/2021 21:00:00 AAABBBCCC Error processing '\\SERVER\DRIVELETTER\PATH\PROCESS' the object is in use by another process
18502/01/2021 21:00:00 AAABBBCCC Error processing '\\SERVER\DRIVELETTER\PATH\File.txt' the object is in use by another process
18602/01/2021 21:00:00 AAABBBCCC Incremental backup of '\\SERVER\DRIVEPATH\ finished with 2 error(s)
187
188Status: Stopped
189Name: TSM Scheduler Service
190DisplayName: TSM Scheduler Service
191PSComputerName: SERVER
192
193Status: Stopped
194Name: TSM Scheduler Service for SERVER
195DisplayName: TSM Scheduler Service
196PSComputerName: SERVER
197
198Status: Running
199Name: TSM Scheduler Service
200DisplayName: TSM Scheduler Service
201PSComputerName: SERVER
202
203Status: Running
204Name: TSM Scheduler Service for SERVER
205DisplayName: TSM Scheduler Service
206PSComputerName: SERVER
207
Current output #2
Get-Content "$((Get-Location).Drive.Name):\Test\file.log" | Select-Object -Last 100 | Select-String -Pattern "SCHEDULEREC STATUS BEGIN" -Context 0,24

function Start-WindowsTSMBackup {
    <#
    .Synopsis
    Standard backup troubleshooting process for TSM-based clients on Windows

    .Description
    Runs the standard initial TSM troubleshooting tasks against a remote computer: collects basic host information, reads the TSM schedule and error logs, and restarts the TSM Scheduler Service.

    .Parameter ComputerName
    Name of the server on which to run the TSM process; you will be prompted for it if it is not supplied

    .Parameter Output
    Location where the success/failure text output should be written

    .Parameter Credential
    Allows usage over domain/local networks that require password authentication

    .Notes
    NAME: Start-WindowsTSMBackup.ps1
    AUTHOR: mm079

    #>

    [cmdletbinding()]
    param(
        [ValidateScript({Test-Connection -ComputerName $_ -Quiet -Count 1})]
        [parameter(Mandatory=$true)] # Will request the input of the computer name
        [string[]]$ComputerName,     # string[] so the foreach below can process more than one computer

        # Output file for the results
        [string]$Output = "C:\Temp\$ComputerName Results.txt",

        <#
        # Use this to submit credentials automatically
        # (single quotes keep the '$$' in the password from being expanded)
        [string]$Username = "domain\username",
        [string]$Password = 'Pa$$w0rd',
        [securestring]$SecurePassword = ( $Password | ConvertTo-SecureString -AsPlainText -Force),
        [pscredential]$Credential = (New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $Username,$SecurePassword),
        #>

        # Use this to submit credentials by requested input
        [System.Management.Automation.PSCredential]$Credential = (Get-Credential)
    ) # Param

    BEGIN {

    } # BEGIN

    PROCESS {

        foreach ($Computer in $ComputerName) {

            if (Test-Connection -ComputerName $Computer -Count 1 -ErrorAction SilentlyContinue) {
                try {
                    $Results = Invoke-Command -ScriptBlock {

                        # Obtain verbose information for testing (note the leading $)
                        $VerbosePreference = "Continue"

                        # Query the server for basic information - hostname and current time
                        Write-Output $env:COMPUTERNAME
                        Write-Output ('Current Date/Time: ' + (Get-Date) )

                        # Retrieve the last boot time
                        systeminfo.exe | find /i "Boot Time"

                        # TSM Service Last Restart Time
                        Write-Output ('TSM Service Last Restart Time: ' + (Get-EventLog -LogName "System" -Source "Service Control Manager" -EntryType "Information" -Message "*TSM Scheduler Service*running*" -Newest 1).TimeGenerated )
                        Write-Verbose "Basic Information processed"


                        if (Test-Path "$((Get-Location).Drive.Name):\Program Files\Tivoli\TSM\baclient") {

                            # Obtains DSMSched information
                            Get-Content "$((Get-Location).Drive.Name):\Program Files\Tivoli\TSM\baclient\dsmsched.log" | Select-Object -Last 100 | Select-String -Pattern "SCHEDULEREC STATUS BEGIN" -Context 0,24
                            Write-Verbose "DSMSched Information processed"

                            # Obtains DSMErrors
                            Get-Content "$((Get-Location).Drive.Name):\Program Files\Tivoli\TSM\baclient\dsmerror.log" | Select-Object -Last 200 | Select-String -Pattern "Error", "failure"
                            Write-Verbose "DSMError Information processed"

                        } # if
                        else {

                            Write-Output "File path to TSM logs not found"
                            Write-Verbose "Unable to process TSM information due to invalid path - not found"
                        } # else

                        ## Restart the "TSM Scheduler Service"
                        if (Get-Service -Name "TSM*" -ErrorAction SilentlyContinue)
                        {
                            Stop-Service -Name "TSM*" -PassThru
                            Start-Service -Name "TSM*" -PassThru
                            Write-Verbose "TSM Service processed"
                        }
                        else
                        {
                            Write-Output "Could not restart TSM Service"
                            Write-Verbose "TSM Service restart failed"
                        }

                    } -ComputerName $Computer -Credential $Credential -ErrorAction Stop # Invoke-Command (the parameters must continue this statement; on their own line after the closing brace they are a parse error)

                    $Results # emit the gathered results so they reach the caller
                } # try
                catch {

                    Write-Error "$($_.Exception.Message) - Line Number: $($_.InvocationInfo.ScriptLineNumber)"

                } # catch
            } # if

        } # foreach

    } # PROCESS
    END {

    } # END

} # Function Start-WindowsTSMBackup
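For reference, a minimal invocation of the function above might look like the following; the server name is a placeholder, and `Get-Credential` will prompt interactively since no credential is passed:

```powershell
# Hypothetical usage - "SERVER01" is a placeholder computer name.
# -Verbose surfaces the Write-Verbose progress messages in the script.
Start-WindowsTSMBackup -ComputerName "SERVER01" -Verbose
```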
EXAMPLESERVERNAME
Current Date/Time: 02/02/2021 14:27:00
System Boot Time: 01/15/2021 12:00:00 AM
TSM Service Last Restart Time: 02/01/2021 16:27:21

02/01/2021 21:00:00 --- SCHEDULEREC STATUS BEGIN
02/01/2021 21:00:00 Total number of objects inspected: ####
02/01/2021 21:00:00 Total number of objects backed up: ####
02/01/2021 21:00:00 Total number of objects updated: ####
02/01/2021 21:00:00 Total number of objects rebound: ####
02/01/2021 21:00:00 Total number of objects deleted: ####
02/01/2021 21:00:00 Total number of objects expired: ####
02/01/2021 21:00:00 Total number of objects failed: ####
02/01/2021 21:00:00 Total number of objects encrypted: ####
02/01/2021 21:00:00 Total number of subfile objects: ####
02/01/2021 21:00:00 Total number of objects grew: ####
02/01/2021 21:00:00 Total number of retries: ####
02/01/2021 21:00:00 Total number of bytes inspected: ####
02/01/2021 21:00:00 Total number of bytes transferred: ####
02/01/2021 21:00:00 Data transfer time: ####
02/01/2021 21:00:00 Network data transfer rate: ####
02/01/2021 21:00:00 Aggregate data transfer rate: ####
02/01/2021 21:00:00 Objects compressed by: ####
02/01/2021 21:00:00 Total data reduction ratio: ####
02/01/2021 21:00:00 Subfile objects reduced by: ####
02/01/2021 21:00:00 Elapsed processing time: ####
02/01/2021 21:00:00 --- SCHEDULEREC STATUS END
02/01/2021 21:00:00 --- SCHEDULEREC OBJECT END BACKUP 02/01/2021 21:00:00
02/01/2021 21:00:00 Scheduled event 'BACKUP' completed successfully.
02/01/2021 21:00:00 Sending results for scheduled event 'BACKUP'

02/01/2021 21:00:00 AAABBBCCC Error processing '\\SERVER\DRIVELETTER\PATH\PROCESS' the object is in use by another process
02/01/2021 21:00:00 AAABBBCCC Error processing '\\SERVER\DRIVELETTER\PATH\File.txt' the object is in use by another process
02/01/2021 21:00:00 AAABBBCCC Incremental backup of '\\SERVER\DRIVEPATH\ finished with 2 error(s)

Status: Stopped
Name: TSM Scheduler Service
DisplayName: TSM Scheduler Service
PSComputerName: SERVER

Status: Stopped
Name: TSM Scheduler Service for SERVER
DisplayName: TSM Scheduler Service
PSComputerName: SERVER

Status: Running
Name: TSM Scheduler Service
DisplayName: TSM Scheduler Service
PSComputerName: SERVER

Status: Running
Name: TSM Scheduler Service for SERVER
DisplayName: TSM Scheduler Service
PSComputerName: SERVER
EXAMPLESERVERNAME
Current Date/Time: 02/02/2021 14:27:00
System Boot Time: 01/15/2021 12:00:00 AM
TSM Service Last Restart Time: 02/01/2021 16:27:21

02/01/2021 21:00:00 --- SCHEDULEREC STATUS BEGIN
02/01/2021 21:00:00 AAABBBCCC Error processing '\\SERVER\DRIVELETTER\PATH\PROCESS' the object is in use by another process
02/01/2021 21:00:00 AAABBBCCC Error processing '\\SERVER\DRIVELETTER\PATH\File.txt' the object is in use by another process
02/01/2021 21:00:00 AAABBBCCC Incremental backup of '\\SERVER\DRIVEPATH\ finished with 2 error(s)

Status: Stopped
Name: TSM Scheduler Service
DisplayName: TSM Scheduler Service
PSComputerName: SERVER

Status: Stopped
Name: TSM Scheduler Service for SERVER
DisplayName: TSM Scheduler Service
PSComputerName: SERVER

Status: Running
Name: TSM Scheduler Service
DisplayName: TSM Scheduler Service
PSComputerName: SERVER

Status: Running
Name: TSM Scheduler Service for SERVER
DisplayName: TSM Scheduler Service
PSComputerName: SERVER
EXAMPLESERVERNAME
Current Date/Time: 02/02/2021 14:27:00
System Boot Time: 01/15/2021 12:00:00 AM
TSM Service Last Restart Time: 02/01/2021 16:27:21


02/01/2021 21:00:00 AAABBBCCC Error processing '\\SERVER\DRIVELETTER\PATH\PROCESS' the object is in use by another process
02/01/2021 21:00:00 AAABBBCCC Error processing '\\SERVER\DRIVELETTER\PATH\File.txt' the object is in use by another process
02/01/2021 21:00:00 AAABBBCCC Incremental backup of '\\SERVER\DRIVEPATH\ finished with 2 error(s)

Status: Stopped
Name: TSM Scheduler Service
DisplayName: TSM Scheduler Service
PSComputerName: SERVER

Status: Stopped
Name: TSM Scheduler Service for SERVER
DisplayName: TSM Scheduler Service
PSComputerName: SERVER

Status: Running
Name: TSM Scheduler Service
DisplayName: TSM Scheduler Service
PSComputerName: SERVER

Status: Running
Name: TSM Scheduler Service for SERVER
DisplayName: TSM Scheduler Service
PSComputerName: SERVER
Any suggestions are greatly appreciated, thank you.
ANSWER
Answered 2021-Mar-21 at 14:49
Unfortunately, I do not have a true solution to the issue. Instead, the entire script was reworked and then implemented within BladeLogic to make it work as needed, so an answer is no longer required. Thank you to those who attempted to assist.
Community Discussions contain sources that include Stack Exchange Network