I have since come up with a better solution using CrashPlan rather than hacking the Synology.
This article is workable, but not yet up to the quality needed to be considered complete.
The requirements outlined here are:
- Off-site (from where the servers are located) backup for small businesses
- A centralized storage area for a small or home office
- Media functionality for a small home (which also doubles as an additional backup site)
We chose Synology, which has an easily hackable system and one of the best software platforms we have seen.
- DS212j using the Marvell Kirkwood mv6281 ARM chipset with 16-bit DDR2, 256MB of RAM
- Two 3 Terabyte Drives
Synology DS212j Setup
Perform the following updates:
- Update the firmware
- Sync with the time server
- Disable cache management (when a UPS is not available)
Setup the Volumes
- S.M.A.R.T. test
- enable the home directory for SSH
ipkg is a package manager for the lightweight embedded Linux system that Synology provides.
At a high level, to install ipkg, as root,
- Download and run a script which the community calls a bootstrap file specific to the NAS processor hardware
- Modify .profile to include ipkg in the path
The following procedure was successful with DSM 4.0-2233.
Determine the processor of your NAS. The DS212j uses the Marvell Kirkwood mv6281 ARM chipset with 16-bit DDR2, 256MB of RAM.
It is not clear why the links point to the unstable directory by default. However, at least for the version used here, the bootstrap in unstable and stable is identical.
Instead of following the normal bootstrap installation instructions, log in through ssh as root and download the bootstrap,
Make sure to download the bootstrap that matches the NAS processor hardware!
Run the installer,
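The download-and-run sequence can be sketched as follows; the feed URL is the one referenced later in this article, and the bootstrap filename shown is an example for the Marvell Kirkwood build — verify the exact name in the feed directory before downloading:

```shell
# run as root over ssh; the .xsh filename is an example — check the feed directory
cd /volume1/@tmp
wget http://ipkg.nslu2-linux.org/feeds/optware/cs08q1armel/cross/unstable/syno-mvkw-bootstrap_1.2-7_arm.xsh
sh syno-mvkw-bootstrap_1.2-7_arm.xsh
```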
Edit the root account's .profile file and ensure that /opt/bin is located at the beginning of the PATH,
Your final PATH line should look like this,
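As a sketch, the PATH line in /root/.profile should begin with /opt/bin and /opt/sbin; the trailing directories below are typical DSM defaults and may differ on your system:

```shell
# /root/.profile — /opt/bin and /opt/sbin must come first
PATH=/opt/bin:/opt/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin
export PATH
```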
Log out of the terminal and log back in as root,
Verify ipkg is working and at the same time update the package list,
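A minimal check is to run the update itself; it should contact the repository and refresh the package list:

```shell
ipkg update
```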
The ipkg update output will show the repository being used. In the example above, load a browser and go to http://ipkg.nslu2-linux.org/feeds/optware/cs08q1armel/cross/unstable/ to see the list of software available for installation.
Installing packages with ipkg is similar to using apt-get with Debian or Ubuntu. Synology keeps a manual online for reference. You should, though, be able to get by with the following common commands,
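The common commands are sketched below; `<package>` is a placeholder for the package name:

```shell
ipkg update              # refresh the package list from the repository
ipkg list                # list packages available for installation
ipkg list_installed      # list packages already installed
ipkg install <package>   # install a package
ipkg remove <package>    # remove a package
ipkg upgrade             # upgrade all installed packages
```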
With this setup, we install the following,
Common Errors Installing Packages
The reason for this error is that you are not logged in as root.
Setup Remote Backup User
Rather than using root to pull data down from other systems, we will use a dedicated remotebackup user.
The remotebackup user could not be created entirely from the shell: it was not possible to add the user to groups, change the password, or specify a UID when creating the user, and the resulting user was not recognized by the system. Creating the user through the UI works, but does not let you define the UID.
Create the backup group on the command line and give it a GID of 34 to follow the Ubuntu standard,
Instead, add the remotebackup user manually by editing the /etc/group file
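Assuming remotebackup should be a member of the new group, the added line in /etc/group would look like this:

```shell
# /etc/group — backup group with GID 34 (Ubuntu convention), remotebackup as member
backup:x:34:remotebackup
```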
Now change the UID to 3001, following the BonsaiFramework standard, and give remotebackup shell access by editing the /etc/passwd file. Do not forget to back up the file first,
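A sketch of the edit; the home directory shown is the Synology default under /var/services/homes (an assumption to verify on your system), and the final field shows the shell as set later in this section:

```shell
# back up the file before editing
cp /etc/passwd /etc/passwd.bak
# the remotebackup line should end up looking like this:
# remotebackup:x:3001:34::/var/services/homes/remotebackup:/bin/sh
vi /etc/passwd
```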
To allow remotebackup to log in, change the default shell from nologin to /bin/sh. The chsh command is not currently available on the package site, so you must edit the passwd file manually,
We also need to change the user's home folder,
Verify that remotebackup can log in,
Add to remotebackup the private keys required to log into the other systems from which backups are transferred.
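A minimal sketch of preparing the key directory as the remotebackup user; the id_rsa filename is an example:

```shell
# create the .ssh directory with the strict permissions sshd expects
mkdir -p ~/.ssh
chmod 700 ~/.ssh
# then place the private key at ~/.ssh/id_rsa and run: chmod 600 ~/.ssh/id_rsa
```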
Creating the Backup Destination
The backup destination will be readable and writable only by remotebackup,
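As a sketch, assuming the destination lives on volume1 and is named backup (both example values):

```shell
# run as root; path and names are examples
mkdir -p /volume1/backup
chown remotebackup:backup /volume1/backup
chmod 700 /volume1/backup   # read/write/execute for remotebackup only
```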
Note that ash is the default shell. Synology selected ash because it is a lightweight version of bash and generally compatible.
Creating the rsync script
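A minimal sketch of such a script, with the host, user, and paths all as example values:

```shell
#!/bin/sh
# backup.sh — pull data from a remote server into the local backup share
SRC="remotebackup@server.example.com:/home/data/"
DEST="/volume1/backup/server/"

# -a preserves permissions and times, -z compresses in transit,
# --delete mirrors deletions from the source
rsync -az --delete -e ssh "$SRC" "$DEST"
```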
To test-run the script, the command is,
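Assuming the script was saved as /volume1/backup/backup.sh (an example path):

```shell
sh /volume1/backup/backup.sh
```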
Adding the Cronjob
To add the script to cron, edit the crontab located at /etc/crontab. Make sure you are root.
If you are unsure how to schedule times, view the cron page.
When adding or removing commands in the crontab, make sure to restart the crond service so the changes take effect.
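Sketching both steps, with the schedule and script path as examples; the crond init script location shown is typical for DSM 4 but may differ across DSM versions:

```shell
# line added to /etc/crontab — run the backup daily at 02:30 as root
# 30 2 * * * root /volume1/backup/backup.sh

# restart crond so the change takes effect
/usr/syno/etc/rc.d/S04crond.sh stop
/usr/syno/etc/rc.d/S04crond.sh start
```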
The logs showing whether your cron job ran are located in /var/log/messages.
Connecting with Clients
Mac OS X Auto Mount
Open Finder and click DiskStation on the left tab.
Look to the top right and click the "Connect As..." button.
Mac OS X Hidden Mount with GUI
To mount a hidden share as a specific user, perform the following steps. It is assumed that Mac file service has been enabled in DiskStation.
Connect to Server
Use the key combination Command-K or choose Go > Connect to Server from the menu bar.
Type the following,
Type the Synology DiskStation's IP address or server name preceded by smb:// or afp://, followed by the ID of the user to log in with and the share path, and click Connect,
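For example, to reach the hidden share used later in this article as the remotebackup user (the server name is an example):

```
smb://remotebackup@diskstation/myshare.private
```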
According to the Synology website, SMB gives better performance, so it is recommended that you connect to the shared folders via SMB rather than AFP.
Enter the credentials of a user with access to the shared folder, then click Connect.
Note that this network share will not show up in the SHARED listing in Finder. Instead, look for the share in the /Volumes folder; in this example, /Volumes/myshare.private/.
Mac OS X Mount Hidden Share with CLI
The advantage of the CLI (Command Line Interface) is that it is not obvious to another casual user that you have mounted the hidden share, and (this still needs research) you can quickly delete the history entries and remove all traces of the private share.
The command mount_smbfs is a wrapper for "mount -t smbfs" so the following command sequence will also work, though according to the man page for mount_smbfs we should use mount -t,
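The sequence can be sketched as follows, reusing the example share and server names from above:

```shell
# create the mount point, then mount the hidden share over SMB
mkdir /Volumes/myshare.private
mount -t smbfs //remotebackup@diskstation/myshare.private /Volumes/myshare.private
```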
Isn't there a way to not have to manually make the directory before mounting?
Mac OS X Unmount Hidden Share with GUI
Do not know how to do this yet. Please share if you do.
Mac OS X Unmount Hidden Share with CLI
To be extra secure, unmount your hidden share when you have finished using it. Go to the command line and use the umount command. In this example it would be,
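Reusing the example mount point from above:

```shell
umount /Volumes/myshare.private
```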
How to clear history of last command - http://thoughtsbyclayg.blogspot.ca/2008/02/how-to-delete-last-command-from-bash.html
Linux Mount with CLI
Specifically tried with Lubuntu,
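A sketch using the cifs mount type (requires the cifs-utils package; the server, share, and mount point are examples):

```shell
sudo mkdir -p /mnt/myshare.private
sudo mount -t cifs //diskstation/myshare.private /mnt/myshare.private -o username=remotebackup
```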
Improving the Automatic Backup
- Progress log
- Start and stop process times
- Time span
- File integrity - CRC checks
- Emergency Alerts
- Security restricting terminal access and permissions to remotebackup
- Scalability - backup files that get too large
It turns out we cannot create our own users with specific UIDs under 1024... so backing up and restoring with the proper UIDs is a bit more challenging. One option may be storing UIDs during backup and restoring them afterwards.
CrashPlan GUI with Ubuntu Desktop
If you want to use the GUI with Ubuntu Desktop,
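One commonly documented approach for a headless CrashPlan server is to tunnel the desktop client over SSH rather than run the GUI remotely; the port and property names below reflect CrashPlan 3.x defaults and are assumptions to verify against your version:

```shell
# on the desktop machine: forward the CrashPlan service port from the server
ssh -L 4243:localhost:4243 user@backupserver
# then point the local CrashPlan desktop client at the tunnel by setting
# servicePort=4243 and serviceHost=127.0.0.1 in the client's conf/ui.properties
```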
Transfer speed fix test
VNC Autostart - http://blog.johngoulah.com/2013/01/ditching-vino-for-x11vnc/
Auto Mounting - https://help.ubuntu.com/community/Autofs
File Transfer Speed Test - http://askubuntu.com/questions/17275/progress-and-speed-with-cp