I like our make-planet-techy subreddit. Articles seem to get rated as one would expect. I happily switched to reddit as my primary way to consume Planet Mozilla. Redditors, keep up the good work!
Our new telemetry dashboard went live yesterday. It’s missing features, data, and UX polish. However, it is public, fast, and hackable, so it should evolve quickly.
Mark Reid is our first server-side dev. His primary task is switching the telemetry backend from Hadoop to a custom telemetry server. The new server infrastructure will enable a live dashboard (there is a 3-day delay at the moment) plus the ability to run queries in minutes rather than hours.
Dhaval Giani, “the intern”, is our first kernel hacker. He’s helping land volatile memory in the kernel. This feature will let Firefox safely consume more memory when memory is plentiful and use less in memory-constrained scenarios. Hopefully he’ll also add read-only file compression to ext4.
The blogs linked above are in the please-add-to-planet queue. Expect them to spend a few weeks there; please subscribe to their RSS feeds in the meantime.
Investigating SQLite Performance via Telemetry
Years ago I tweaked our SQLite clients to use a larger page size. In my testing this seemed to achieve a speedup of 0.2-2x (see bug 416330). Unfortunately, ~30% of our users are still on the old page size (bug 634374). We used a jydoop query to figure out whether it’s worth developing a feature to convert these users over, comparing two different measures of performance: time spent reading and time spent executing a query. By both measures there is a 4x reduction in SQLite IO waits with the larger page size, so we’ll be adding code to convert people over more aggressively.
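For readers curious about the mechanics: SQLite’s page size is a property of the database file, and changing `PRAGMA page_size` only takes effect when the file is rebuilt, which is why existing users stay on the old size until something rewrites their databases. Below is a minimal Python sketch of such a conversion using `VACUUM`; this is not Firefox’s actual migration code, and the `32768` value and `convert_page_size` name are illustrative.

```python
import sqlite3

def convert_page_size(db_path, new_page_size=32768):
    """Rebuild an existing SQLite database with a different page size.

    PRAGMA page_size on its own does not touch existing pages; a VACUUM
    rewrites the whole file, at which point the new size takes effect.
    """
    # isolation_level=None puts the connection in autocommit mode,
    # since VACUUM cannot run inside a transaction.
    conn = sqlite3.connect(db_path, isolation_level=None)
    try:
        old = conn.execute("PRAGMA page_size").fetchone()[0]
        if old != new_page_size:
            conn.execute("PRAGMA page_size = %d" % new_page_size)
            conn.execute("VACUUM")  # rewrites the file with the new page size
        return conn.execute("PRAGMA page_size").fetchone()[0]
    finally:
        conn.close()
```

The cost of this approach is a full rewrite of the database file, which is presumably why conversion has to be scheduled carefully rather than done eagerly on every startup.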
This was a cool investigation because it highlighted how much more confidence telemetry gives us when making performance decisions now than we had a few years ago.