There’s been some Friday night kernel drama on the Linux kernel mailing list… Linus Torvalds has expressed regret over merging the Bcachefs file-system, prompting a back-and-forth with the file-system’s maintainer.

  • Max-P@lemmy.max-p.me · 3 months ago

    I know, that was an example of why it doesn’t work on ZFS. That would be the closest you can get with regular ZFS, and as we both pointed out, it makes no sense and doesn’t work. The L2ARC is a cache; you can’t store files in it.
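The distinction is visible from the pool itself. A minimal sketch, assuming a hypothetical pool named `tank` and an NVMe device at `/dev/nvme0n1`:

```shell
# Attach the NVMe device as an L2ARC cache vdev (hypothetical names).
zpool add tank cache /dev/nvme0n1

# The pool's SIZE and FREE columns are unchanged: a cache vdev
# contributes no user-visible capacity.
zpool list tank

# The device appears under the "cache" section, not as a storage vdev.
zpool iostat -v tank
```

If the cache device dies, nothing is lost; that is exactly why it can never be counted as usable space.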

    The whole point of bcachefs is tiering. You can give it a 4 TB NVMe, a 4 TB SATA SSD and an 8 TB HDD and get almost the whole 16 TB of usable space in one big filesystem. It’ll shuffle the files around for you to keep the hot data set on the fastest drive, and you can pin data to the storage medium that matches the performance needs of the workload. The roadmap claims they want to analyze usage patterns and automatically store files on the slowest drive that doesn’t bottleneck the workload. The point is, unlike regular bcache or the ZFS ARC, it’s not just a cache; it’s also storage space available to the user.
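That tiered layout can be sketched with bcachefs-tools. The device paths and label names below are placeholders; the target options follow the bcachefs manual’s multi-device example:

```shell
# Placeholder devices; each --label applies to the device that follows it.
bcachefs format \
    --label=nvme.nvme1 /dev/nvme0n1 \
    --label=ssd.ssd1 /dev/sdb \
    --label=hdd.hdd1 /dev/sdc \
    --foreground_target=nvme \
    --promote_target=nvme \
    --background_target=hdd

# Mount all member devices as one filesystem.
mount -t bcachefs /dev/nvme0n1:/dev/sdb:/dev/sdc /mnt
```

New writes land on the NVMe tier, hot reads get promoted to it, and cold data is rewritten to the HDD in the background, while the capacity of all three devices counts toward the one filesystem.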

    You wouldn’t copy the game to another drive yourself directly. You’d request the filesystem to promote it to the fast drive. It’s all the same filesystem, completely transparent.
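bcachefs exposes the same target options per file and directory. If I’m reading the tooling right, pinning a game to the fast tier looks something like this; the path is hypothetical and the exact flag names should be checked against your bcachefs-tools version:

```shell
# Hypothetical path; sets per-inode options so this subtree
# lives on, and is promoted to, the NVMe tier.
bcachefs setattr --foreground_target=nvme \
                 --promote_target=nvme \
                 --background_target=nvme \
                 /mnt/games/big-game
```

No manual copy, no second mount point; the filesystem relocates the extents itself.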

      • apt_install_coffee · 3 months ago

        Brand-new anything will not show up with amazing performance, because the primary focus is correctness, with features secondary.

        Premature optimisation could kill a project’s maintainability; wait a few years. Even then, despite Kent’s optimism I’m not certain we’ll see performance beating a good non-CoW filesystem; XFS and EXT4 have been eking out performance gains for many years.

          • apt_install_coffee · 3 months ago

            A rather simplistic view of filesystem design.

            More complex data structures are harder to optimise for pretty much all operations, but I’d suggest that by far the most important metric for performance is development time.

            • ryannathans@aussie.zone · 3 months ago

              At the end of the day, the performance of a performance-oriented filesystem matters. Without performance, it’s just complexity.