My Little Corner of the Net

Am I an Avid Reader Yet?

At the beginning of 2024, I gave myself a goal to read more. As a kid, I loved to read, but somewhere along the line, life happened, and aside from an occasional book here and there, I hadn’t really been reading much for several years. I decided to change that.

I started off by making a list of books that I remembered having to read in high school and college. I thought it would be fun, or at least interesting, to read them again with a fresh set of eyes. Back then, my perspective was to read what I needed to know to be successful in class, not so much to enjoy the story. Some books I liked, some books I didn’t, but I wanted to give them a fresh look with my now more experienced eyes. What would I get from them twenty or more years later that I missed the first time? Would my expanded world view make me look at the stories differently than I did as a kid? Would I find some new meaning in a book that I hated as a teen?

It started on a Saturday afternoon with a digital copy of The Great Gatsby that I checked out from the New York Public Library’s app. To my surprise, I was finished with it by the end of the day.

I had no idea how many books I could realistically finish, so I set my goal relatively low: five books by the end of the year. After Gatsby, I went on to read several other American classics such as A Separate Peace, To Kill A Mockingbird, The Red Badge of Courage, and Tortilla Flat. By July I had reached my goal.

On Prime Day, I decided to buy a Kindle. Until then, I was mostly reading on an older Samsung Galaxy Tab tablet. It worked well, as long as I wasn’t trying to read outside in the sun. I figured the Kindle would be more versatile.

The Kindle came with a trial subscription to Kindle Unlimited. My initial impression of the service was that it didn’t have a lot of what I wanted to read. Still, I figured I’d find things that looked interesting for the three months that I had the trial and then I’d cancel. Of course, I keep finding things I want to read, so I still have the subscription, which I am now paying for (but I guess I’m using it enough to make it worthwhile).

By the end of 2024, I had finished 13 books, with another two in progress. I decided that a modest increase to 15 books would be a good goal for 2025, and I kept chugging away.

As of today, April 22, 2025, I have finished my 15th book of the year, Dr. Seuss Goes to War: The World War II Editorial Cartoons of Theodor Seuss Geisel, with over eight months still to go in the year.

I usually have two books going at once, generally one fiction and one nonfiction at any given time, and I try to dedicate about an hour a day to reading. Some days that doesn’t happen and I only end up reading a few pages; other days I get really into it and read for much longer. As of today, Amazon is reporting that I’ve read for 150 days in a row!

So what do you think? Am I qualified to call myself an avid reader yet?

Expanding the File System on a Le Potato with Raspberry Pi OS

I have a Libre Computer Le Potato single board computer that I bought during the pandemic when Raspberry Pi boards were pretty much impossible to get. It has turned out to be a great little machine that’s been so reliable that a year or so ago I decided to move some of my most important home lab services, including my smart home software stack (Home Assistant, Node-RED, etc.), to it.

On the board’s downloads page, Libre Computer offers a version of Raspberry Pi OS Bullseye (which they still refer to as Raspbian) that has been customized for the Le Potato’s hardware. For consistency with my actual Pi devices, I’m using this image on my board, though I’ve manually upgraded it to Bookworm. For the most part, everything works exactly as it would on a Pi. There are a few small differences, such as the mount points of the filesystems, but nothing that you’d notice during normal, everyday operations.

Lately, the machine had been acting a little wonky: web requests, for example, would take forever to load if something else was running at the same time. Upon investigation, I found that my 32 GB MicroSD card was nearly full. No problem—just get a bigger card, clone the current one onto it, and then use raspi-config to resize the partition. Easy peasy.

I got a new 128 GB MicroSD card and proceeded to do just that. I shut down the Le Potato, ejected the card, and then used Balena Etcher on my MacBook to copy the contents of the old card to the new one (there are any number of ways I could have done this, including the Linux dd command, but Etcher has become my preferred way to prepare SD cards for my various little computers). When it was done, I put the new card in the Le Potato and started it back up. As expected, the machine booted and everything came up, but the drive was still full because copying the partition doesn’t resize it.
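For reference, the dd route would look roughly like this (the device names are placeholders; check lsblk first and be very sure you have the right ones, because dd will happily overwrite the wrong disk):

# clone the old card (sdX) onto the new one (sdY), showing progress and flushing writes at the end
sudo dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync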

Next, I ran sudo raspi-config, went to “Advanced Options,” and chose “Expand Filesystem.” After going through the prompts and rebooting, I ran the df -h command to check the free space available on the machine’s disk and, to my surprise, it was still showing as full.

Filesystem     Size  Used Avail Use% Mounted on
/dev/mmcblk1p2  30G   28G    2G  97% /

Note: to be concise, I’m only showing the affected drive’s stats. When you run the command, you’ll likely see several lines for things like the in-memory file systems that the operating system creates when it runs.

Thinking that the resize didn’t take for some reason, I ran sudo fdisk -l to list the partition table, and I got this:

Device         Boot  Start       End   Sectors   Size Id Type
/dev/mmcblk1p1        8192    532479    524288   256M  c W95 FAT32 (LBA)
/dev/mmcblk1p2      532480 249737215 249204736 118.8G 83 Linux

Clearly the partition had been resized, but the file system wasn’t seeing it. Why not? I was perplexed.

After a bit of pondering, I decided to check what type of file system the drive was using. To do this, I ran df -hT. This time, I got the following output:

Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/mmcblk1p2 btrfs      30G   28G    2G  97% /

Butter FS! All of my actual Pis are using the ext4 file system, which raspi-config knows how to grow to fill the expanded partition automatically, but my Le Potato is using btrfs, the B-tree file system. btrfs is a newer file system that has some nice features, including the ability to span multiple physical drives, data integrity features similar to RAID, and the ability to make snapshots of the drive’s state. I chose to use btrfs when I built my NAS because it gave me RAID-like redundancy across the initial two drives that I added, but would also let me use different size drives in the future, when I need to add more capacity, without losing any space the way I would with RAID.

Because of the way btrfs works, another step is needed:

sudo btrfs filesystem resize max /

This tells btrfs to resize the file system mounted at / (the root file system) to use all of the space available to it, which in this case is the rest of the partition that raspi-config expanded.

The fact that this can be done (and has to be done) on the running file system always seems weird to me, as I’m used to having to unmount drives before doing any kind of maintenance on them, but because a btrfs file system can span multiple disks, the resize command takes a mount point rather than a device, so the file system has to be mounted.
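If you want to confirm that btrfs actually sees the new space, you can ask it directly (this is just a sanity check; the df output below tells the same story):

# list the devices backing the mounted file system and their sizes
sudo btrfs filesystem show /

# show a more detailed breakdown of allocated versus free space
sudo btrfs filesystem usage /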

While it technically isn’t necessary, I did one more reboot after resizing, just to make sure everything got updated:

sudo shutdown -r now

Once back up, I ran df -hT again and now I’m seeing this:

Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/mmcblk1p2 btrfs     119G   28G   91G  24% /

Much better. And more importantly, the machine appears to be stable once again.

I don’t know how popular btrfs is with other Raspberry Pi “clones,” but if you attempt to install a larger SD card and don’t get the results you’re expecting, be sure to check what file system is being used. There may be extra steps necessary.

Simple Image Galleries With Eleventy

As I continue converting my largest non-work site from Jekyll to Eleventy, I keep coming across things that I did in Jekyll that no longer work in Eleventy.  One of these is image galleries.

Jekyll and Eleventy have a fundamentally different approach to how they handle files.  Jekyll splits all of the files in the project folder into two types, based on whether or not they contain front matter.  Files with front matter are transformed and the result is saved to the site folder.  Files without front matter (which includes all images, PDFs, JavaScript, CSS, etc.) are simply copied to the site.  The latter, which Jekyll refers to as “static files,” are placed into a static_files collection that can be accessed in templates.  I was able to use this collection to make simple photo galleries.

First, I created a gallery.html layout that looked something like this:

{{ content }}

<div class="gallery">
  {% for image in site.static_files %}
    {% if image.path contains page.gallerydir %}
      <div class="gallery-image"><a href="{{ image.path }}" class="gallery-link"><img src="{{ image.path }}"></a></div>
    {% endif %}
  {% endfor %}
</div>

Then, for each gallery page, I’d add a markdown file with a gallerydir variable in my front matter set to the path of the directory containing my gallery images:

---
title: Big Event Photos
layout: gallery
gallerydir: events/images/big-event
---

Check out some photos from our Big Event!

When the page is processed, the template code loops through the entire static_files collection, checks whether the path of each file falls within the gallerydir, and if so, links to it in the output.  I use a lightbox script (GLightbox, in this specific case) to allow the user to browse the images in a pleasing way.
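The lightbox setup itself is only a couple of lines; it looks something like this (the script path is a placeholder, and the selector assumes the gallery-link class from the layout above):

<script src="/assets/js/glightbox.min.js"></script>
<script>
  // Attach GLightbox to every gallery link so clicking a thumbnail opens the lightbox.
  const lightbox = GLightbox({ selector: ".gallery-link" });
</script>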

Eleventy doesn’t have this concept of static files.  Eleventy only processes the types of files you tell it to look at and ignores everything else.  If you want Eleventy to copy static files, you have to tell it to do so by using eleventyConfig.addPassthroughCopy() or something similar.  While this will get the files into your site, they won’t be automatically added to any collections.
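For example, a single line inside module.exports in .eleventy.js is enough; my existing rule for JPEG images is something along these lines (the exact glob is illustrative):

// Copy every JPEG in the project straight through to the output site.
eleventyConfig.addPassthroughCopy("**/*.jpg")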

To build the list of gallery images, and keep them separate from the rest of the site’s images, I moved all of the gallery images into a “galleries” directory.  Within that, I created subdirectories of images for each gallery.  Then I used the Node.js package fast-glob to find those files.

First, fast-glob has to be installed:

npm install --save-dev fast-glob

Then, it needs to be imported within .eleventy.js:

const fastglob = require("fast-glob")

And then we call it from inside the module.exports routine within .eleventy.js to build a list of our gallery images:

const galleries = fastglob.sync(["**/galleries/*/*.*", "!_site/**"])

This sucks the paths of any file that has the parent directory structure of galleries/{some gallery name} into an array. (The second pattern, !_site/**, tells fast-glob to ignore anything that has already been copied to the _site directory; fast-glob doesn’t understand the Eleventy file structure, so it doesn’t know to skip _site on its own.)  To actually use it, we need to create a new Eleventy collection. To do that, we also add this to module.exports:

eleventyConfig.addCollection("galleries", function (collection) {
    let items = galleries.map((x) => {
        let parts = x.split("/")
        return {
            gallery: parts[parts.length - 2], // the gallery's directory name
            path: x,                          // the original path to the image
            name: parts[parts.length - 1]     // the image's filename
        }
    })
    return items
})

This takes the list of image paths and turns it into a set of objects with three properties: gallery (the name of the gallery the image is within, pulled from the last directory name in the path), path (the original path of the image), and name (the filename of the image, which I’m not actually using right now, but I figured might be useful to know in the future).  This list of objects is used to populate a new “galleries” collection in Eleventy.
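For reference, each item in the collection ends up shaped roughly like this (the path and filename are made up):

{
  gallery: "big-event",                           // the image's parent directory
  path: "events/galleries/big-event/photo1.jpg",  // the full path returned by fast-glob
  name: "photo1.jpg"                              // just the filename
}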

With this new collection, I can update my gallery layout to look more like this:

{{ content }}

<div class="gallery">
    {% assign images = collections.galleries | where: "gallery", gallery %}
    {% for image in images %}
        <div class="gallery-image"><a href="{{ image.path }}" class="gallery-link"><img src="{{ image.path }}"></a></div>
    {% endfor %}
</div>

And in my page’s front matter, I replace gallerydir with gallery, and assign it the name of the gallery (i.e. the directory name within one of my site’s “galleries” directories) I want to show:

---
title: Big Event Photos
layout: gallery
gallery: big-event
---

Check out some photos from our Big Event.

It’s important to note that fast-glob only returns a list of files that match the pattern; it does not copy them to the site automatically.  In my case, an existing addPassthroughCopy() rule for all JPEG images does the trick, but we could also update the map function inside addCollection() to handle this if we wanted.  As a future extension to this concept, I may look at using Eleventy’s Image plugin to automatically resize my images to ideal dimensions, but in my current use case, all of my images have already been manually resized.

So that’s how I did it.  While this method gets me to feature parity on the Eleventy site, it still needs some work.  As it stands now, neither the Jekyll nor the Eleventy solution is accessible.  I need a way to add additional information, like alt text for the images, to the galleries.  The obvious solution is probably to add a CSV file to the site’s _data directory to store this information, but then I could just loop through that instead of using the file glob, so maybe this whole approach isn’t really needed at all.  We shall see.

Recreating Jekyll’s _drafts Directory in Eleventy

I’m in the process of converting a couple of sites that I built a few years ago using Jekyll to use Eleventy instead. Both tools are static site generators that work very similarly, but Eleventy gives me more flexibility and, given that it’s based in JavaScript—a language I use daily—rather than Ruby—a language I know almost nothing about—Eleventy is much easier for me to extend and customize to my needs.

Jekyll has a unique feature that Eleventy does not: the drafts folder. In Jekyll, you can add content that isn’t ready for public consumption to a directory named _drafts, and when you build the site, this content will be ignored. To include the content, you add a --drafts argument to the jekyll build or jekyll serve command.

While I never really made much use of this feature, the site I’m converting now does have a drafts directory with a couple of in-progress pages. Eleventy doesn’t have the concept of draft content, so I wanted to find a workaround, at least for the time being.

Eleventy has a few ways to ignore files from being processed. First, anything inside node_modules is automatically ignored. Then, anything in .gitignore (at least by default) or .eleventyignore files gets excluded, but adding the _drafts directory to one of these would mean it would never be processed. I need a way to selectively tell Eleventy to build the draft content when I want it and ignore it when I don’t.

Fortunately, there is a simple solution: Eleventy’s ignores collection, which is automatically populated from the files above. Eleventy conveniently provides an API for adding or removing paths from the collection on a programmatic basis. To make the drafts folder work, I added the following inside the module.exports in my site’s .eleventy.js file:

if(process.env.ELEVENTY_ENV === "production") {
    eleventyConfig.ignores.add("_drafts")
}

This looks for an environment variable named ELEVENTY_ENV with the value production and, if found, adds the file glob “_drafts” to the list of ignored content. This has the effect of ignoring anything in any directory named _drafts anywhere within the site when that environment value is present. If the ELEVENTY_ENV variable is not set, or it contains a different value, the draft content will be processed.

I’m already using ELEVENTY_ENV to manage minification and map file creation for CSS and client-side JavaScript assets, so this works well for me. In fact, I don’t really have to think much about it because I’ve incorporated it into the npm run scripts in my package.json:

"scripts": {
"build": "npm run clean; ELEVENTY_ENV=production npx @11ty/eleventy",
"build-test": "npx @11ty/eleventy",
"watch": "npx @11ty/eleventy --watch",
"serve": "npx @11ty/eleventy --serve",
"clean": "rm -rf _site/*"
}

This means that drafts will be excluded if I npm run build the site, but not during development when I’m most likely using npm run serve. If I want to exclude that content during development for some reason, it’s just a matter of running ELEVENTY_ENV=production npm run serve.

Dynamic Autocomplete with AlpineJS and (Almost) No Code

I’m in the process of adding some new features to a web application I created several years ago.  It’s an app that makes it easy for a handful of non-technical users to manage users and groups in a third-party system.  It’s a multipage app that doesn’t use a lot of JavaScript, but where it does, it uses jQuery (don’t judge, we were all using jQuery when this thing was written).

I don’t have time to completely refactor the entire app, but I’d like to start the process of moving away from jQuery, so I figured I’d avoid using it for the new functionality.  AlpineJS is one of my favorite JavaScript libraries right now, and I figured it would be the perfect tool to use for this project, since it would give me modern, reactive-style support while still working within the confines of the existing multipage framework.  Alpine can do most of what I need pretty easily: things like modals and input checking.  One of the things that I didn’t have a good answer for, however, was the autocomplete.

In the current version of the application, adding users is done by selecting a group and entering a username into a form field.  Of course we don’t expect that the application’s users will necessarily know the usernames of the people they’re adding, so I added a jQueryUI autocomplete which is tied to a script that does an LDAP search and returns a list of names.  As the user types in the field, a list of possible people pops up and, when one is selected, the proper username is entered into the field and the form can be submitted.
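The existing wiring is only a few lines of jQuery; it looks something like this (the selector and minLength are placeholders, but the idea is the same):

// Attach a jQuery UI autocomplete to the username field; jQuery UI sends the text
// typed so far to the source URL as a "term" query string parameter.
$("#username").autocomplete({
  source: "/lookup",
  minLength: 2
});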

The new functionality that I am adding also needs a user lookup.  Of course, there are lots of “vanilla” autocompletes out there that I could use, but ideally I’d like to limit the number of extra libraries I need to include.  I’ve also been working on another project lately that involves form processing with JavaScript and, at some point when I was looking at the MDN site for something, I was reminded of the HTML5 <datalist> element.

If you aren’t aware, <datalist> lets you create a static list of options, similar to a <select> element, that can be attached to an <input>.  Unlike a <select>, however, the <datalist> list is only a list of suggestions; values that are not in the list can still be entered.

A <datalist> looks a lot like a <select>:

<datalist id="animals">
  <option>Dog</option>
  <option>Cat</option>
  <option>Mouse</option>
</datalist>

It can also accept a key-value list, just like a <select>.  The only difference is that when an item is selected, the value shown in the field will be that of the value attribute rather than the label text.

<datalist id="animals">
  <option value="dog">Dog</option>
  <option value="cat">Cat</option>
  <option value="mouse">Mouse</option>
</datalist>

A <datalist> is tied to an <input> element by adding a list attribute to the <input> element. The list attribute should be set to the id of the <datalist> that’s to be used:

<input type="text" name="animal" list="animals" />

Normally the <datalist> would contain static values that are included in the page when it is rendered on the server, but this wouldn’t work for my use case: it would be impractical to include all 20,000 or so user accounts that we have on every page load.  Instead, I need to build the list dynamically.  This is where AlpineJS comes in.

First, I need a datasource.  As I mentioned above, the app already has an endpoint, “/lookup” that is used by jQueryUI.  This takes a query string parameter “term” and returns a JSON array that looks similar to this:

[
  {
    "value": "jsmith",
    "label": "John Smith (jsmith, Student)"
  },
  {
    "value": "sjones",
    "label": "Susan Jones (sjones, Faculty)"
  }
]

Next, I need AlpineJS.  In this case I’m using both the AlpineJS base library as well as Alpine Fetch, a third-party plugin that helps with fetching remote data into Alpine.  As with all Alpine plugins, Alpine Fetch needs to be included before the base library.

<script defer src="https://gitcdn.link/cdn/hankhank10/alpine-fetch/main/alpine-fetch.js"></script>
<script src="//unpkg.com/alpinejs" defer></script>

Now I can create the AlpineJS component.  That looks like this:

<div x-data="{
    results: null,
    term: null
}">
  <label for="username">Username:</label>
  <input type="text" id="username" name="username" list="userlist" 
    x-model="term"
    @keypress.throttle="results = await $fetch('/lookup?term=' + term)"
  />
  <datalist id="userlist"> 
    <template x-for="item in JSON.parse(results)">
      <option :value="item.value" x-text="item.label"></option>
    </template>
  </datalist>
</div>

That’s it!  We now have a fully working autocomplete.

Let me break down what’s going on here:

  1. In the first line, the x-data attribute on the <div> signals to Alpine that we are creating a new component.  The value of that attribute contains the default values for variables that we’ll be using within the component.
    • results contains the results that were received the last time the /lookup endpoint was queried.  It will contain a JSON string.  We don’t have any initial data, so it is initialized to null.
    • term will contain the term that is being searched.  It will be linked to the value of the input element, but since we aren’t starting with an initial value, it is also set to null.
  2. The <input> element on line 6 is where most of the interaction occurs:
    • We link the field to the <datalist> using the list="userlist" attribute.
    • The x-model="term" attribute establishes a two-way linkage between the field’s value and the term variable we initialized in x-data.  This means that any time the field value is changed, the variable will be updated to reflect it.  Likewise, if the variable is ever changed directly (which never happens in this context), the field value will also change to reflect it.
    • Finally, the @keypress attribute sets an event handler that calls /lookup with the current value of the field each time a key is pressed.  The .throttle modifier is used to limit these calls to no more than once every 250 ms to prevent flooding the server.  $fetch() is a magic method provided by Alpine Fetch that makes a web request and returns the result body as a string, which we store in the results variable we created in x-data.
  3. Alpine watches for changes to variables and reacts to them, so once we get new results, the x-for loop in the <template> on line 11 gets triggered.  This creates new <option> tags within the <datalist> for each result in the returned JSON data, replacing any that were there previously.  Since results contains the raw string that was returned from the web request, we call JSON.parse() on it to parse it into a JavaScript array.
  4. On each <option> tag that’s created, the :value="item.value" attribute tells Alpine to set a value attribute with the value from the result item and the x-text="item.label" tells it to set the element’s innerText to the value of the result item’s label.

So far this approach seems to work great.  The only downside is that each browser has its own way to format the <datalist> display, and there’s no way to customize it with CSS.  That’s not a big deal to me in an app that only has a handful of users, but it might be if it’s used on a large, public-facing, well-branded site.  If that’s the case, it probably wouldn’t be too difficult to modify this approach to use, say, an absolute-positioned <ul> list, the way more traditional autocomplete utilities do things, though that would require a couple of additional event handlers, some ARIA tags to ensure accessibility, and a bunch of CSS.
