Install S3fs on Linux
How to Install S3fs and Mount an S3 Bucket on a Linux Server
S3fs is a FUSE-based file system backed by Amazon S3. It lets you mount an S3 bucket on the Linux filesystem and use it like a network drive, so the bucket appears as local storage on your server. Don't expect the same performance as a local disk, but it is a great way to add virtually unlimited storage at a reasonable price.
In this tutorial, you will learn how to install S3fs and mount an S3 bucket on Ubuntu; the commands for other Linux distributions will differ.
Requirements
- An AWS account. If you don't have one, create your own AWS Account first.
- Command-line access as a user with sudo privileges.
Step 1. Install S3fs on Ubuntu
Open a terminal on your system and SSH into your Ubuntu EC2 server. Update your system's package repository by running the first command below.
Once the update completes, run the second command to install S3fs (along with the AWS CLI, which we'll use for testing) on your system.
sudo apt-get update
sudo apt install s3fs awscli -y
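To confirm that both packages installed correctly, you can check their versions:
s3fs --version
aws --version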
Step 2. Create the S3 bucket and configure access
In the AWS Console, create a new S3 bucket. We'll name ours "MyS3Bucket", but pick your own name (bucket names must be globally unique).
In the IAM console, create a new user and select "Access key - Programmatic access"; make sure to download the Access key ID and Secret access key. Then create a new IAM policy granting that user access to the bucket (see the example below) and attach it to your user.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::MyS3Bucket"]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": ["arn:aws:s3:::MyS3Bucket/*"]
        }
    ]
}
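Before moving on, you can optionally confirm that the new keys can reach the bucket using the AWS CLI installed in Step 1. The sketch below passes the credentials as environment variables for a one-off check; substitute your own Access key ID, Secret access key, and bucket name, and adjust the region to wherever your bucket lives:
#One-off check that the IAM user can list the bucket
AWS_ACCESS_KEY_ID=ACCESS_KEY_ID \
AWS_SECRET_ACCESS_KEY=SECRET_ACCESS_KEY \
AWS_DEFAULT_REGION=us-east-1 \
aws s3 ls s3://MyS3Bucket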
Step 3. Create the S3fs Credentials file
Switch back to the server's console and create a file to hold your IAM user's access key ID and secret key, using the command below (substitute ACCESS_KEY_ID and SECRET_ACCESS_KEY with the values from the previous step).
You also need to secure the credentials file by setting restrictive access permissions.
Finally, we'll also create a mount point directory, we'll call ours "backup":
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > /home/ubuntu/.s3fs-creds
chmod 600 /home/ubuntu/.s3fs-creds
mkdir /home/ubuntu/backup
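Before touching fstab, you can optionally do a one-off manual mount to confirm the credentials file works; this is just a quick test using the same passwd_file option that will go into the fstab entry in the next step:
#Test mount using the credentials file (substitute your bucket name)
s3fs MyS3Bucket /home/ubuntu/backup -o passwd_file=/home/ubuntu/.s3fs-creds
#Confirm the mount, then unmount again before continuing
df -h /home/ubuntu/backup
fusermount -u /home/ubuntu/backup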
Step 4. Automount S3FS in Fstab
In order to have the drive mounted every time you reboot the server, it is recommended to add an entry to the fstab file. Be careful when editing this file, as an accidental change to the root volume entry will prevent your server from booting.
Edit the fstab file and add the entry below at the end of that file, substituting MyS3Bucket with the name of your bucket:
sudo nano /etc/fstab
...
s3fs#MyS3Bucket /home/ubuntu/backup fuse _netdev,allow_other,passwd_file=/home/ubuntu/.s3fs-creds 0 0
This more complex example uses an S3 bucket in the us-west-1 AWS Region, enables a local cache, and also sets the default uid and gid for the mounted volume. Note that the uid and gid options expect numeric IDs (1000 is the default ubuntu user on a stock Ubuntu instance):
s3fs#MyS3Bucket /home/ubuntu/backup fuse _netdev,allow_other,use_cache=/tmp/cache,url=https://s3-us-west-1.amazonaws.com,passwd_file=/home/ubuntu/.s3fs-creds,defaults,uid=1000,gid=1000 0 0
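If you're not sure which numeric IDs to use, you can look them up for your user first:
#Show the numeric uid and gid of the ubuntu user
id ubuntu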
Step 5. Mount the drive and Test uploading files
Mount the drive, change into the mount point directory, and verify the mount; then create a few dummy files using the touch command:
sudo mount /home/ubuntu/backup
cd /home/ubuntu/backup
df -h
touch /home/ubuntu/backup/file{1..10}.txt
ls -al /home/ubuntu/backup
The previous commands will create a few empty files in that directory. To verify that the files were indeed created in S3, view the bucket's contents in the S3 console.
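If you prefer the command line, you can run the same check with the AWS CLI, assuming it has been configured with the IAM user's keys (for example via aws configure):
#List the bucket contents to confirm the test files arrived (substitute your bucket name)
aws s3 ls s3://MyS3Bucket/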
If the drive didn't mount, view the system logs to troubleshoot any potential issues. You may also run into some permissions issues; see this related article on configuring default permissions: Plex Media Server.
A few additional useful commands:
#Display the system logs
tail -f /var/log/syslog
#Unmount the drive
sudo umount /home/ubuntu/backup
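Another handy check when troubleshooting is confirming whether the S3 bucket is currently mounted:
#Check whether the bucket is mounted (s3fs mounts show up as type fuse.s3fs)
mount | grep s3fs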