Cloudflare Pages 😍 I got into Cloudflare Pages because it lets you host static websites with all the latest optimizations, conveniently and for free. Setting up a static website with a custom domain is easy, and you get automatic SSL, a CDN, and HTTP/3 out of the box.
This blog is hosted this way, along with a few of my other projects. I love that I can get a super-fast static website up and running in a few minutes without having to worry about infrastructure details.
TLDR: ZFS free-space reporting is a lagging indicator.
Background: I use a Proxmox VM server backed by a ZFS array of hard drives for various pieces of personal infrastructure. I also have security cameras that upload motion-triggered videos to my server (via FTP!).
Problem Description: I would like to use 90% of my available space for the most recent security videos.
Recipe:
- Create a dedicated ZFS volume.
- Set up a ZFS quota.
- Run a cron job to free space faster than it gets consumed by video uploads (sketched below).
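A minimal sketch of the cleanup job, assuming a hypothetical dataset mounted at /tank/cameras with a quota already set; the path and the 90% threshold are illustrative, and (per the TL;DR above) ZFS free-space reporting lags deletions, so a real job has to be more conservative than this naive check:

```python
#!/usr/bin/env python3
"""Hypothetical cron job: delete the oldest camera videos until usage
drops below a threshold. Paths and threshold are illustrative."""
import os
import shutil

MOUNTPOINT = "/tank/cameras"   # assumed dataset mountpoint
THRESHOLD = 0.90               # keep usage below 90% of the dataset size

def usage_fraction(path: str) -> float:
    # On a ZFS dataset with a quota, the reported filesystem size
    # reflects that quota (and the free-space number lags deletions).
    total, used, _free = shutil.disk_usage(path)
    return used / total

def files_oldest_first(path: str):
    files = []
    for root, _dirs, names in os.walk(path):
        for name in names:
            full = os.path.join(root, name)
            files.append((os.path.getmtime(full), full))
    return sorted(files)  # oldest modification time first

if __name__ == "__main__":
    for _mtime, full in files_oldest_first(MOUNTPOINT):
        if usage_fraction(MOUNTPOINT) < THRESHOLD:
            break
        os.remove(full)
```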
My previous blog post on tracing Firefox IO with bpftrace via the official page_fault_user tracepoint left me a bit unsatisfied with how complicated it turned out. Complexity has the potential to be error-prone, and the syscall-tracing dependency makes it impossible to trace IO within the main executable.
I decided to try reimplementing the trace using my old approach of tracing the ext4 functions that handle page faults. This turned out to be much more robust.
Modern browsers are some of the most complicated programs ever written. For example, the main Firefox library on my system is over 130 MB. Doing 130 MB of IO poorly can be quite a performance hit, even with SSDs! :)
Few people seem to understand how memory-mapped IO works. There are no pre-canned tools to observe it on Linux, so even fewer know how to observe it. Years ago, when I was working on Firefox startup performance, I discovered that libraries were loaded backwards on Linux (blog1, blog2, paper, GCC bug).
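To make the mechanism concrete, here is a toy Python experiment (my own illustration, not one of the tools from the original investigation) that maps a file and watches the process's page-fault counters while touching its pages:

```python
#!/usr/bin/env python3
"""Toy illustration of memory-mapped IO on Linux: map a file read-only,
touch one byte per page, and report how many page faults that caused."""
import mmap
import resource
import sys

def fault_counts():
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_minflt, ru.ru_majflt

path = sys.argv[1]  # e.g. point it at the main Firefox library

with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ) as m:
    minflt0, majflt0 = fault_counts()
    # Each touch of a page not yet mapped into this process faults; if the
    # data is also missing from the page cache, it is a *major* fault that
    # the kernel services with actual disk IO.
    for offset in range(0, len(m), mmap.PAGESIZE):
        _ = m[offset]
    minflt1, majflt1 = fault_counts()

print(f"minor faults: {minflt1 - minflt0}, major faults: {majflt1 - majflt0}")
```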
My current employer does a lot of really cool systems work that's covered by NDAs. I recently integrated a cool open source tool into our workflow, and I felt it deserved a blog post.
NFS Testing Requires Parallelism. I work for Pure Storage. One of the products we make is a scale-out NFS1 (and S3-compatible) server called FlashBlade.
I was asked to test FlashBlade2 performance scaling. I needed to generate NFS read workloads of 15-300 gigabytes per second.
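As a toy illustration of the "parallelism" point (not the load generator from the post), the sketch below fans file reads out across processes; a single sequential reader tops out far below hundreds of gigabytes per second, so real testing needs many readers on many client hosts:

```python
#!/usr/bin/env python3
"""Toy parallel-read load generator: multiple processes each stream
files from a mount point. Paths and process count are illustrative."""
import os
import sys
from multiprocessing import Pool

CHUNK = 1 << 20  # read in 1 MiB chunks

def read_file(path: str) -> int:
    """Stream one file end to end, returning bytes read."""
    total = 0
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    return total

if __name__ == "__main__":
    mount = sys.argv[1]  # e.g. an NFS mount full of large files
    files = [os.path.join(mount, name) for name in os.listdir(mount)]
    # One reader per CPU on this host; real load generation also needs
    # many client hosts to avoid bottlenecking on a single NIC.
    with Pool(os.cpu_count()) as pool:
        total = sum(pool.map(read_file, files))
    print(f"read {total / 1e9:.1f} GB")
```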
This post is about minimizing the amount of disk IO and CPU overhead when reading zip files.
I recently saw an article about a new format that was faster than zip.
This is quite surprising, as to my mind zip is one of the most flexible and low-overhead formats I've encountered.
Some googling showed me that over the past 11 years people have noticed that Firefox uses optimized zip files. This inspired me to document the thinking behind the optimized zip format I implemented in Firefox back in pre-pandemic 2010.
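As a quick illustration of why zip is low-overhead for readers: the central directory lives at the end of the archive, so a reader can list every entry with a small read near the tail of the file, without touching any file data. A sketch with Python's standard zipfile module (not the Firefox code discussed in the post):

```python
#!/usr/bin/env python3
"""List a zip archive's entries by parsing only the central directory
at the end of the file; the compressed payloads are never read."""
import sys
import zipfile

with zipfile.ZipFile(sys.argv[1]) as zf:
    # ZipFile seeks to the End Of Central Directory record, then parses
    # the central directory; entry payloads stay untouched until read().
    for info in zf.infolist():
        print(f"{info.filename}: {info.compress_size} -> {info.file_size} bytes")
```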