Forwarded from farseerfc 😂
Philipp's Tech Blog
How-To: Using ZFS Encryption at Rest in OpenZFS (ZFS on Linux, ZFS on FreeBSD, ...) - Philipp's Tech Blog
An upcoming feature of OpenZFS (and ZFS on Linux, ZFS on FreeBSD, …) is At-Rest Encryption, a feature that allows you to securely encrypt your ZFS file systems and volumes without having to provide an extra layer of devmappers and such. To give you a brief…
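For a concrete sense of what the feature looks like, here is a minimal sketch using the property names native encryption introduces in OpenZFS (`encryption`, `keyformat`, `keylocation`); the pool/dataset names are illustrative:

```shell
# Create an encrypted dataset; ZFS prompts for a passphrase.
zfs create -o encryption=aes-256-gcm \
           -o keyformat=passphrase \
           -o keylocation=prompt \
           tank/secure

# After a reboot or pool import, the key must be loaded
# before the dataset can be mounted:
zfs load-key tank/secure
zfs mount tank/secure
```

Keys are managed per dataset, so unrelated datasets in the same pool can stay unencrypted or use different keys.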
fc fs筆記
JRTipton_ReFS_v2.pdf
ReFS v2: Cloning, Projecting, and Moving Data
File systems are fundamentally about wrapping abstractions around data: files are really just named data blocks. ReFS v2 presents just a couple new abstractions that open up greater control for applications and virtualization.
We'll cover block projection and cloning as well as in-line data tiering. Block projection makes it easy to efficiently build simple concepts like file splitting and copying as well as more complex ones like efficient VM snapshots. Inline data tiering brings efficient data tiering to virtualization and OLTP workloads.
https://lore.kernel.org/linux-btrfs/[email protected]/T/
How robust is BTRFS?
This is a testimony from a BTRFS-user.
For a little more than 6 months, I had my server running on BTRFS.
My setup was several RAID-10 partitions.
As my server was located on a remote island and I was about to leave, I added two more hard disks to minimize the risk of failure. That gave me four WD10JFCX drives on the EspressoBIN server, running Ubuntu Bionic Beaver.
Before I left, I *had* noticed some beep-like sounds coming from one of the drives, but it seemed OK, so I didn't bother with it.
So I left, and 6 months later I noticed that one of my 'partitions' was failing, so I decided to go back and replace the failing drive. The journey takes 6 hours.
When I arrived, I noticed more beep-like sounds than when I left half a year earlier.
But I was impressed that my server was still running.
I decided to make a backup and re-format all drives, etc.
I added the drives back one by one, and when I added the third drive I again started hearing that sound I disliked so much.
After replacing the port-multiplier, I didn't notice any difference.
"The power supply!" I thought. Though it was a 3A PSU that should easily handle four 2.5" WD10JFCX drives, the specs could have been a little embellished, so I found myself a MeanWell IRM-60-5ST supply and used that instead.
Still the same noise.
I then investigated all the cables; lo and behold, silly me had used a cheap pigtail for the barrel connector, and the wires in the pigtail were so thin that they could not carry the current, so the voltage dropped further with each drive I added.
I re-did my power cables and then everything worked well.
...
After correcting the problem, I got curious and listed the statistics for each partition.
I had more than 100,000 read/write errors PER DAY for 6 months.
That's around 18 million read/write errors, caused by drives turning on and off "randomly".
AND ALL MY FILES WERE INTACT.
This borders on the impossible.
I believe that no other file system would be able to survive such conditions.
And the developers of this file system really should know what torture it has been through without failing.
Yes, all files were intact. I tested all the files I had backed up 6 months earlier against those on the drives; there were no differences - they were binary identical.
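The per-device error counters the testimony refers to can be inspected with the standard btrfs tools; a small sketch, assuming the filesystem is mounted at /mnt/data (the mountpoint is illustrative):

```shell
# Show cumulative per-device read/write/flush/corruption/generation
# error counters for a mounted btrfs filesystem:
btrfs device stats /mnt/data

# After fixing the underlying hardware problem, reset the counters
# so any new errors stand out:
btrfs device stats --reset /mnt/data

# A scrub re-reads and checksums all data and metadata, repairing
# bad copies from a good RAID-10 mirror copy where possible
# (-B runs it in the foreground):
btrfs scrub start -B /mnt/data
```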
https://www.youtube.com/watch?v=azjmSkyBJqM SUSE Labs Conference 2019 - Adding DAX support to btrfs
YouTube
SUSE Labs Conference 2019 - Adding DAX support to btrfs
btrfs is a CoW filesystem, so adding DAX support was met with its own set of challenges around mmap and snapshotting, since DAX modifies data "in-place". This talk discusses the findings and challenges of adding DAX support to…
https://lwn.net/Articles/838819/ XFS, stable kernels, and -rc releases
lwn.net
XFS, stable kernels, and -rc releases
Ever since the stable-update process was created, there have been questions
about which patches are suitable for inclusion in those updates; usually,
these discussions are driven by people who think that the criteria should
be more restrictive. A regression…
https://utcc.utoronto.ca/~cks/space/blog/linux/Ext3ToExt4Limitation Upgrading from ext3 to ext4 does not enlarge existing inodes, so if you need the features that the ext4 inode format brings, you have to recreate the filesystem
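The limitation is easy to check for yourself; a sketch, assuming the filesystem lives on /dev/sdb1 (the device name is illustrative):

```shell
# ext3-era filesystems were commonly created with 128-byte inodes,
# while ext4 features such as nanosecond timestamps and inline
# extended attributes want the 256-byte inode format:
tune2fs -l /dev/sdb1 | grep 'Inode size'

# The inode size is fixed at mkfs time, so getting the larger
# inodes means recreating the filesystem and restoring from
# backup afterwards:
mkfs.ext4 -I 256 /dev/sdb1
```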
https://www.youtube.com/watch?v=HgCvcMQnJQ0 FAST '15 - F2FS: A New File System for Flash Storage
YouTube
FAST '15 - F2FS: A New File System for Flash Storage
F2FS: A New File System for Flash Storage
Changman Lee, Dongho Sim, Joo-Young Hwang, and Sangyeun Cho, Samsung Electronics Co., Ltd.
F2FS is a Linux file system designed to perform well on modern flash storage devices. The file system builds on append-only…
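Trying F2FS out is straightforward, since it has been upstream since Linux 3.8 and the userspace tools ship as f2fs-tools; a minimal sketch (device name and mountpoint are illustrative):

```shell
# Format a flash partition with F2FS and mount it:
mkfs.f2fs /dev/nvme0n1p1
mount -t f2fs /dev/nvme0n1p1 /mnt/flash
```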
https://www.youtube.com/watch?v=4g7e-acWbiE SDC2020: zonefs: Mapping POSIX File System Interface to Raw Zoned Block Device Accesses
YouTube
SDC2020: zonefs: Mapping POSIX File System Interface to Raw Zoned Block Device Accesses
The zonefs file system is a simple file system that exposes zones of a zoned block device (host-managed or host-aware SMR hard disks and NVMe Zoned Namespace SSDs) as files, hiding from the application most zoned block device zone management and access constraints.…
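In practice the one-file-per-zone model looks like this; a sketch, assuming a zoned NVMe device at /dev/nvme1n1 and an illustrative input file:

```shell
# Format and mount the zoned device with zonefs (Linux 5.6+,
# mkzonefs comes from zonefs-tools):
mkzonefs /dev/nvme1n1
mount -t zonefs /dev/nvme1n1 /mnt/zones

# Conventional zones appear as files under cnv/, sequential zones
# under seq/. Sequential zone files only accept aligned appends at
# the zone's write pointer, e.g.:
dd if=data.bin of=/mnt/zones/seq/0 bs=4096 oflag=direct,append
```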
https://github.com/openzfs/zfs/pull/11389 Set aside a metaslab for ZIL blocks
GitHub
Set aside a metaslab for ZIL blocks by ahrens · Pull Request #11389 · openzfs/zfs
Motivation and Context
Mixing ZIL and normal allocations has several problems:
The ZIL allocations are allocated, written to disk, and then a few
seconds later freed. This leaves behind holes ...
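Before this change, the usual way to keep short-lived ZIL allocations out of the main allocation classes was a dedicated log device (SLOG); the PR instead sets aside a metaslab on the main vdevs so pools without an SLOG benefit too. For context, the SLOG approach looks like (pool and device names are illustrative):

```shell
# Add a fast, low-latency device as a separate intent log,
# so synchronous writes no longer punch short-lived holes
# into the main vdevs' metaslabs:
zpool add tank log /dev/nvme0n1p2

# Per-vdev I/O, including log device activity, can be watched with:
zpool iostat -v tank 1
```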