Gentoo Forums
Tip: enable F2FS compression on Raspberry Pi

 
Gentoo Forums Forum Index » Gentoo on ARM
ross_cc
n00b


Joined: 29 Jul 2020
Posts: 17
Location: Manila

PostPosted: Wed Jul 29, 2020 5:01 am    Post subject: Tip: enable F2FS compression on Raspberry Pi

Starting with Linux 5.6, F2FS supports file-system compression. Here is a succinct guide to using this feature on your Pi:

1. Build a 5.6+ kernel (5.7+ for zstd compression support) for both your host machine and your Pi, and make sure to enable the options for F2FS compression support.
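For reference, the relevant kernel options look roughly like this in .config (names as of Linux 5.7; verify against your kernel version under File systems → F2FS):

```
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_XATTR=y
CONFIG_F2FS_FS_COMPRESSION=y
CONFIG_F2FS_FS_LZO=y
CONFIG_F2FS_FS_LZ4=y
CONFIG_F2FS_FS_ZSTD=y
```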

2. Build f2fs-tools. The version in the Gentoo tree (sys-fs/f2fs-tools-1.13.0) is outdated and cannot create compression-enabled filesystems, so you have to build directly from the upstream source: https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs-tools.git.
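For anyone who has not built from that repo before, it is a plain autotools build; a sketch (the autogen.sh step and default install prefix are assumptions to verify against the repo's README):

```shell
git clone https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs-tools.git
cd f2fs-tools
./autogen.sh      # generate the configure script
./configure
make
make install      # as root; installs the new mkfs.f2fs
```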

3. Format the designated partition with the compression attribute enabled:
Code:
mkfs.f2fs -O extra_attr,compression /dev/<partition> -f
(This will erase everything on the partition, so make sure you have backed up any needed files beforehand.)

4. Now you can mount the partition with compression enabled (pick one of lz4, lzo, or zstd as the algorithm):
Code:
mount -o compress_algorithm=lz4/lzo/zstd /dev/<partition> /mnt/<mountpoint>
(Now is the time to restore the backed-up files.)

5. Modify /etc/fstab and append
Code:
,compress_algorithm=lz4/lzo/zstd
to the fourth column (the mount options) of the corresponding partition's entry.
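Put together, an fstab entry for such a partition might look like this (the device node and the other mount options here are just illustrative):

```
/dev/mmcblk0p2   /   f2fs   defaults,noatime,compress_algorithm=zstd   0   0
```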

6. (If the partition happens to serve as rootfs) Append
Code:
rootfstype=f2fs rootflags=compress_algorithm=lz4/lzo/zstd
to cmdline.txt.
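As an illustration, a full cmdline.txt might then read as follows (everything apart from the rootfstype/rootflags parts comes from a typical Pi setup and will differ on yours):

```
console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 rootwait rootfstype=f2fs rootflags=compress_algorithm=zstd
```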

Feel free to ask me to elaborate if you have any questions :-)
Antikapitalista
n00b


Joined: 18 Apr 2011
Posts: 25

PostPosted: Tue Sep 08, 2020 9:30 am

And does it actually make any difference?
The code is architecture-agnostic; I have not tried this on anything ARM-related, only on AMD64 so far. The partition can be formatted and mounted properly, using zstd compression enabled in the kernel, and everything seemingly works: the compression attribute can be set and is inherited, etc. But there are no noticeable space savings; I tested with plain text files as well as object-code files.
So far, f2fs compression seems like snake oil to me.
_________________
On a warpath against the North Atlantic Terrorist Organization.
Ant P.
Watchman


Joined: 18 Apr 2009
Posts: 6921

PostPosted: Tue Sep 08, 2020 9:59 am

zstd makes a very significant difference on btrfs; it should be the same for f2fs:
Code:
/usr/share # compsize .
Processed 87689 files, 39573 regular extents (40152 refs), 60656 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       62%      1.8G         3.0G         3.0G
none       100%      1.1G         1.1G         1.1G
zstd        39%      760M         1.8G         1.9G
Antikapitalista
n00b


Joined: 18 Apr 2011
Posts: 25

PostPosted: Sun Sep 19, 2021 1:40 pm

Ant P. wrote:
zstd makes a very significant difference on btrfs, it should be the same for f2fs:

Actually, it is probably very far from that, unless it has been changed in the meantime.

I was curious as to why there were seemingly negative space savings even after turning on the compression, i.e. the data seemed to take up even more space afterwards. So I looked at the source code and found out that F2FS does support compression, indeed, but only to speed up reading the data from a slow storage medium.

Thus, F2FS compression does not result in fewer blocks allocated for the data, but in holes in the allocated space.
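A quick way to check for yourself whether a filesystem's compression actually frees blocks is to compare a file's apparent (logical) size with its allocated size via du; if the allocator were involved, the second number would shrink. A minimal sketch (file name hypothetical; run it on the mounted f2fs or btrfs partition):

```shell
# write 1 MiB of highly compressible data
yes "highly compressible line of text" | head -c 1M > test.txt
sync

# apparent size (logical length) vs allocated blocks, both in KiB
apparent=$(du --apparent-size -k test.txt | cut -f1)
allocated=$(du -k test.txt | cut -f1)
echo "apparent=${apparent}K allocated=${allocated}K"
# on btrfs with compress=zstd, 'allocated' drops well below 'apparent';
# per the observation above, f2fs "compression" leaves it unchanged
```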

Of course, compression works as expected in Btrfs. Admittedly, my setup (block groups with mixed data and metadata) is reportedly not so well tested; in my case it quite often produces rather scary-looking messages, and almost every power-outage incident makes the filesystem unmountable without clearing the log.

Reiser4 would be an awesome contender if it supported extended attributes and did not eventually hang. It is otherwise bloody fast even on an SSD (Optane), and would certainly pull ahead even more on rotational media, so when I have more time I ought to look into it, because Edward Shishkin seems to have been doing little more than rebasing his patch lately (and the latest one even had a small error).

But, honestly, I do not see a reason to go with F2FS on a proper SSD when Btrfs is so much better in every regard. F2FS is simpler and may be better for media such as memory cards, which have a controller and a flash translation layer, thumb drives and the like.
_________________
On a warpath against the North Atlantic Terrorist Organization.


Last edited by Antikapitalista on Thu Sep 23, 2021 3:17 pm; edited 1 time in total
Goverp
Veteran


Joined: 07 Mar 2007
Posts: 1289

PostPosted: Mon Sep 20, 2021 8:43 am

Antikapitalista wrote:
I was curious as to why there were seemingly negative space savings even after turning on the compression, i.e. the data seemed to take up even more space afterwards, so I looked at the source code and found out that F2FS does support compression, indeed, but only to speed up reading the data from a slow storage medium.

Thus, F2FS compression does not result in fewer blocks allocated for the data, but in holes in the allocated space.
...

Hmm. Interesting. Perhaps the reason the "compression" works this way is that the "holes" don't actually require flash memory committed to them, so it would increase the number of free SD blocks; but I don't know much about the innards of SD cards. Whatever the case, on a Raspberry Pi, the subject of this thread, a performance tweak for its SD card I/O would be well worthwhile.
_________________
Greybeard
Antikapitalista
n00b


Joined: 18 Apr 2011
Posts: 25

PostPosted: Thu Sep 23, 2021 6:39 pm

Goverp wrote:
Antikapitalista wrote:
I was curious as to why there were seemingly negative space savings even after turning on the compression, i.e. the data seemed to take up even more space afterwards, so I looked at the source code and found out that F2FS does support compression, indeed, but only to speed up reading the data from a slow storage medium.

Thus, F2FS compression does not result in fewer blocks allocated for the data, but in holes in the allocated space.
...

Hmm. Interesting. Perhaps the reason the "compression" works this way is that the "holes" don't actually require flash memory committed to them, so it would increase the number of free SD blocks; but I don't know much about the innards of SD cards. Whatever the case, on a Raspberry Pi, the subject of this thread, a performance tweak for its SD card I/O would be well worthwhile.

No, Goverp, it really is as dumb a hack as I described. How do I know that, or how did I find it out?
I had an XFS file-system on a smallish 32 GB Optane drive.
As time went by, more and more fancy things were getting installed and I found myself running out of space. I wanted to learn how to do colour correcting and colour grading with DaVinci Resolve, with the intention of introducing the tool to my sister, as I had only dabbled with colour correcting and colour grading in Adobe SpeedGrade in the early autumn of 2016, using the notorious Blu-ray extended edition and the respective DVD edition of the Fellowship of the Ring. I was just curious what transformation matrix I would get.

But my sister wanted to do it semi-professionally and publish the results, so using a pirated version was deemed immoral, and paying for a Creative Cloud licence was deemed excessive, at least at such an early stage.

But the packages were so big that almost no space was left on the Optane SSD afterwards. So I decided to employ compression. I created an image of the file-system and copied it out to a WD My Passport Studio 2TB drive over a 1394b/FireWire connection, so it did not take very long... Then I formatted the SSD with various file-systems. The only Linux-native ones featuring compression that I considered were Reiser4, F2FS and Btrfs.

I started with Reiser4. I had to figure out the parameters first; the documentation was rather terse. I wanted to know which plug-ins and parameters were relevant, what they were for and how they actually worked, so I had to look at the source code to understand what they really did. During the process I found out that Reiser4 did not support extended attributes, which was a bummer, but having plodded through the mess, I still wanted to try and see what it could do. I also wanted to do some rough speed tests, just a simple timing of writing a directory tree to a compressed filesystem. I used zstd. And Reiser4 was bloody fast! There was almost no difference between writing to a compressed and an uncompressed partition. I think I used the adaptive compression, on a dynamic lattice, as the documentation calls it...
Reiser4 of the unstable v5 branch was the fastest of them all. But perhaps I used too many experimental options, because while I was dumping the contents of the image back to the SSD, the system locked up. The speed was still amazing, though, so maybe I will give it a bit more love in the future and see where it takes me... I already have some experience with patching extended attributes into a modified NILFS2 file-system, so it should not be such a hurdle.

Next came F2FS. Again, I meticulously studied all the switches... then I formatted the partition... and began dumping the contents of the image to it... and went out to have my afternoon tea.
When I came back, I could not believe my eyes: a write error, no space left on the device. What?! The contents of the image would not even fit into the partition!
Naturally, I had to investigate the cause. I started by copying only the contents of the /usr/bin subdirectory. I made sure that the parent directories were properly compressed, i.e. had the compression attribute enabled; I set it manually, just to be sure. After dumping, I found that the files were all "compressed", but without any space savings! At this point F2FS compression was beginning to look like either the emperor's new clothes or snake oil. I wanted to find out which was closer to the truth, so I looked at the source code... and it was more like snake oil. The compression was there, but under no circumstances could it save any blocks; it lacked such an allocator.
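For anyone wanting to reproduce the per-file check: the compression attribute referred to here is the standard 'c' inode flag, set and inspected with chattr/lsattr (paths below are hypothetical):

```shell
chattr +c /mnt/test/somedir       # mark the directory; newly created files inherit the flag
lsattr -d /mnt/test/somedir       # a 'c' among the printed flags means "compress"
lsattr /mnt/test/somedir          # check the files created under it
```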

I threw F2FS away in disgust and delightedly formatted the partition with Btrfs. With mixed-allocation block groups and the maximum compression level, 15, the data occupied only about 3/5 to 2/3 of the partition; 1/3 to 2/5 of it was free space.
_________________
On a warpath against the North Atlantic Terrorist Organization.