Indie Support Weeks: Monodraw

I know that #IndieSupportWeeks were supposedly a thing that ended in early 2020, but I don’t see why we shouldn’t continue shouting out to the devs of apps we use every day.

Late in 2020, @Splattack on the Zettelkasten Forum brought up Monodraw – think OmniGraffle, but with ASCII box art!

It produces beauties like this:

 ┌─────────────┐
 │ NSTableView │────────────────────┐
 └─────────────┘                    │
        │                           │
        │                           │
        ▼                           ▼
┌───────────────┐          ┌─────────────────┐
│ NSTextStorage │◀─────────│ NSLayoutManager │
└───────────────┘          └─────────────────┘

If you don’t like to look at that, then I’m afraid we won’t be able to become friends :)

Now why should anyone care for diagrams using box drawing characters? It’s super useful in a plain text note-taking app environment to create semi-visual diagrams of hierarchies with clickable links!

Take this, for example:

┌────────────────────┐
│  [[201708031447]]  │
│ Role of NSTextView │
└────────────────────┘
          │ controller for
          ▼
 ┌──────────────────┐
 │ [[201708031442]] │
 │     Role of      │
 │ NSLayoutManager  │
 └──────────────────┘
          │ delegates to
          ▼
 ┌───────────────────┐
 │ [[201708031445]]  │
 │ Typesetter drives │
 │      layout       │
 └───────────────────┘

Inside the boxes are double-bracketed wiki links that point to other notes in my Zettelkasten note archive. I can click on these if I view the note in The Archive!

The same box drawing diagram inside of The Archive, which makes wiki links clickable

It’s still a hack, but it’s a hack that makes it possible to create visual diagrams of things in your personal knowledge database. I think this is super rad. And now I have to find a way to make The Archive not use >1.0x line heights to avoid the ugly gaps in the box drawings.

I don’t use Monodraw for simple relationship diagrams only, though. I use its beautiful capabilities to make mockups of user interfaces I can share in plain text documents, too.

Mockup of an app window with annotation in Monodraw

The crazy part is that Monodraw is a one-time purchase at 8.99€, so it’s a steal if you like box drawing.

Check out the trial version to see how the interactions work: like all good diagram applications, lines attach to boxes by default and you can move the connections around when you move the connected boxes. But with Unicode box drawing characters. Sweet!

Really, go check out Monodraw!

Using XCTUnwrap Instead of Guard in Unit Tests

In 2015, I sang praises for the guard statement to unwrap optional values in tests. But since then, we were blessed with XCTUnwrap, which makes things even better in my book. I received a comment on the 2015 piece a couple weeks ago, so I figured I might as well document that things have changed in the meantime!

The original post had this code:

func testAssigningGroup_AddsInverseRelation() {

    guard let file = soleFile() else {
        XCTFail("no file")
        return
    }

    guard let group = soleGroup() else {
        XCTFail("no group")
        return
    }

    // Precondition
    XCTAssertEqual(group.files.count, 0)

    file.group = group

    // Postcondition
    XCTAssertEqual(group.files.count, 1)
    XCTAssert(group.files.anyObject() === file)
}

With the new-ish XCTest helpers, we can write it like this:

func testAssigningGroup_AddsInverseRelation() throws {

    let file = try XCTUnwrap(soleFile())
    let group = try XCTUnwrap(soleGroup())

    // Precondition
    XCTAssertEqual(group.files.count, 0)

    file.group = group

    // Postcondition
    XCTAssertEqual(group.files.count, 1)
    XCTAssert(group.files.anyObject() === file)
}

Back in 2015, tests couldn’t even be marked with throws. So this is a nice way to exit a test case early when a condition isn’t met.
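If you wonder what XCTUnwrap does conceptually: it’s a throwing unwrap, roughly like this sketch (not Apple’s actual implementation, which additionally records the failure with file and line info):

```swift
// Conceptual sketch only: return the wrapped value, or throw so a `throws`
// test ends early and is reported as a failure instead of crashing.
struct UnwrapFailure: Error {}

func unwrapped<T>(_ optional: T?) throws -> T {
    guard let value = optional else { throw UnwrapFailure() }
    return value
}
```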

FloatingFilter Gets Rounded Corners

I’ve updated the open source FloatingFilter package to finally have rounded corners. The old corners looked especially out of place on Big Sur; the new ones fit Catalina and older macOS versions, too.

I believe I never announced the development of this library in the past. It’s used in The Archive to show a floating selector for external editors, and will also be a prominent feature of stuff I’m working on. It’s a general purpose “select with fuzzy matching from a list of things” tool.

Check out FloatingFilter on GitHub

Adding a Wiki to the Site

Some things on the blog are supposed to have a longer shelf-life. But the nature of a blog is to present things in a timeline.

With the introduction of the “linked posts” part at the bottom of each post, I employ cross-links from older posts to newer posts to encourage exploration. And I have a structured overview to help with discovery. Even then, I branched out into other topical pages, like the Cocoa Text System overview, or the even larger FastSpring/Distribute outside the MAS page. To make sense of the timeline, I’m introducing what’s basically a ‘garden’ to complement my ‘stream’. It’s not a new idea, but I find that not having these overview pages hampers my writing. Some things need systematic overviews, and I enjoy making these, but there’s no good place for them.

After Denis Defreyne, maker of the static site generator nanoc I use, mentioned in passing that he’s experimenting with wiki links in his own project to adopt the Zettelkasten style, I wanted to look into this again after failed attempts in the past.

To make this addition work, [[wiki links]] are very helpful. A regex applied to the text doesn’t cut it, though, otherwise the verbatim <code> tag just now would be treated as a link, too. Extending the Markdown parser for inline elements is way better. Denis pointed out that the kramdown Ruby Markdown library supports syntax extensions. I was able to whip up a first attempt to inject wiki link detection as inline elements and maintain compatibility with pipes used for table block elements. I published this as a Gist, the code is:

require 'kramdown/parser/kramdown'
require 'kramdown-parser-gfm'

# Based on the API doc comment:
class Kramdown::Parser::GFMWikiLink < Kramdown::Parser::GFM
  def initialize(source, options)
    super

    # Override existing Table parser to use our own start Regex which adds a check for wikilinks
    @@parsers.delete(:table) #Data(:table, TABLE_START, nil, "parse_table")
    self.class.define_parser(:table, TABLE_START)
  end

  # Override Kramdown table pipe check so we can write `[[pagename|Anchor Text]]`.
  # Regex test suite:
  TABLE_PIPE_CHECK = /^(?:\|(?!\[\[)|[^\[]*?(?!\[\[)[^\[]*?\||.*?(?:\[\[[^\]]+\]\]).*?\|)/.freeze  # Fail for wikilinks in same line
  TABLE_LINE = /#{TABLE_PIPE_CHECK}.*?\n/.freeze  # Unchanged
  TABLE_START = /^#{OPT_SPACE}(?=\S)#{TABLE_LINE}/.freeze  # Unchanged

  WIKILINKS_MATCH = /\[\[(.*?)\]\]/.freeze
  define_parser(:wikilinks, WIKILINKS_MATCH, '\[\[')

  def parse_wikilinks
    line_number = @src.current_line_number

    # Advance parser position
    @src.pos += @src.matched_size

    wikilink = Wikilink.parse(@src[1])
    el = Kramdown::Element.new(:a, nil, {'href' => wikilink.url, 'title' => wikilink.title}, location: line_number)
    add_text(wikilink.title, el)
    @tree.children << el
  end

  # [[page_name|Optional title]]
  # For a converter that uses the available pages, see: <>
  class Wikilink
    def self.parse(text)
      name, title = text.split('|', 2)
      new(name, title)
    end

    attr_accessor :name, :title

    def initialize(name, title)
      @name = name.strip.gsub(/ +/, '-')
      @title = title
    end

    def title
      @title || @name
    end

    def url
      # The URL scheme is site-specific; this body is a placeholder.
      "/wiki/#{name}"
    end
  end
end
The original regex to see if a line of text denotes a table needed to be replaced, though. I patched this in by overriding the table detection, but keeping the table handling as-is.
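Here’s a quick sanity check of that replacement regex (copied from the snippet above), run against a plain table row and a line whose only pipe sits inside a wikilink:

```ruby
# TABLE_PIPE_CHECK from the parser above: a pipe inside [[…|…]] alone should
# not make a line count as a table row.
TABLE_PIPE_CHECK = /^(?:\|(?!\[\[)|[^\[]*?(?!\[\[)[^\[]*?\||.*?(?:\[\[[^\]]+\]\]).*?\|)/

puts TABLE_PIPE_CHECK.match?("| a | b |")                   # true: regular table row
puts TABLE_PIPE_CHECK.match?("A [[page|Title]], no table")  # false: pipe is inside the wikilink
```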

All the lookaheads and lookbehinds in the TABLE_PIPE_CHECK make compilation of my page even slower, so I limit the GFMWikiLink parser to the /wiki/**/* route in nanoc’s compilation rules.

It’s functional already, but the result doesn’t have very many things to look at just yet. I’ll add the new wiki to the navigation once I’ve ported over the old wiki stuff I had abandoned.

Funding Open Source Software as a Third Party?

In a Discord chat, we’ve recently talked about how well funding for Blender turned out. At the time of writing, they get $137k per month for development. I cannot say if that’s enough or too little. But it’s not nothing.

Being crowd-funded comes with its perils. Especially with free open-source software like Blender, developers report that it’s not easy to know which user base to focus on, which UI/UX compromises to make, and how to figure out if the project backers are satisfied with the result.

It’s not trivial when you sell apps, either, but there you at least do know which people to listen to if in doubt: your loyal customers.

We wondered how hard it’d be to set up a successful funding pool for other projects, like Emacs. There’s no clear goal, but there’s a ton of bugs to fix and things to maintain, and a couple extra dollars don’t hurt to make it possible to put in more work and at the same time not starve.

Management of expectations, direction of funds, singling out of developers to support – all that would be extra effort. While there’s a FSF fund, there’s no fund that directs your money towards Emacs development. It could go anywhere, split in many ways. So even the money transfer, which is the easiest part, is not simple in this case. It doesn’t suffice to collect money in a pool and then forward the funds to the FSF. (This set-up is also the prime motivation to even talk about crowd-funding Emacs development: because you can’t already do it properly.)

Now making the money available to the software maintainers requires a platform to manage these people. That’s an extra step that sucks. As if raising funds wouldn’t be hard enough already.

Managing people is not a trivial task, either. What if some Emacs user has enough of the daily shortcomings of the editor, whips up a couple of work-in-progress improvements, but is totally new to the scene? How do you compensate these kinds of efforts? They’re not part of the ‘accepted developer pool’, so they won’t get a share of the funds. Do you even want to compensate contributors that way? Would it turn the FOSS spirit on its head and shift the focus from “how can I make this project better” to “how can I make money from this”?

In the end, I’d imagine that managing expectations of backers would be the easiest part, combined with setting up a way to collect and then forward the money. – Legal implications might prove me wrong quickly. Maybe you need a pro-bono organization as a legal entity to manage and forward funds correctly. More overhead. But customer expectations could be managed by making clear from the get-go that the pool of funds is not a custom order form for your favorite Emacs feature.

Here’s what I imagine: Backers can leave a vote and tell devs what they are most interested in. But this is not a promise by the devs. It’s a mutual “feeling out”. A telling of needs and wants. It’s by no means intended to bind devs (legally, morally, or otherwise) to implement that stuff. In the spirit of patronage, devs shall receive money and do good deeds and improve the software, but not be forced to fulfill someone else’s wishes. Might even be worth keeping the developers secret to avoid their getting bugged by angry backers.

Is the best case scenario then that this fund attracts hundreds of thousands of dollars per month? Would this create a political problem because now there’s a well-funded “power” that wants to have a say in the development of Emacs, and would there be conflict with the vision of the FSF, and veteran pro-bono Emacs maintainers? I’d be surprised if there would not be any trouble.

So funding open source projects like Emacs as a 3rd party is not an easy task. It’s simple to set up, but it’s by no means a smooth ride to make it work.

The best thing that could happen is if the FSF offered a way to directly fund Emacs development. But that only means the FSF would have to figure out how to split the funds among developers. And their problem would be that there already are a lot of active developers known to the FSF, so this would have to be figured out from the start for everyone involved, lest they create a caste system of paid and unpaid devs. It might be easier when there’s “a lot” of monthly funds to share. But if you only have a single slice of pizza, how many more ways can you cut it before you might as well not bother trying?

The Archive 3rd Anniversary Giveaway

The confetti looks much nicer in 60 FPS on the real page. Mhmmm confetti...

Our note-taking app for macOS, The Archive, turns 3 years old this week. To celebrate, we wanted to do something nice for all the people who supported us during that time.

The thing we came up with is a giveaway. But every customer of the past 3 years wins automatically. The prize is 1 free license to freely gift to someone else.

The only condition is that you purchased the app before I uploaded everything today – and that you enter before April rolls around, because then we’ll take down the Claim-O-Matic.

Go to the give-away page and enter your email, then you get a link you can share with the ultimate recipient!

It was really fun to create the website for this giveaway. I also find myself staring at the confetti because it’s so soothing to see how it travels downward. I hope you enjoy the experience, too, and that you know someone who would benefit from a tasteful note-taking app in 2021 with no strings attached.

The Beauty of Hacking Swift: Make Union of Set Algebra Types More Obvious

I found it weird to form the union of two CharacterSet instances by calling the union method on one element:

CharacterSet.urlHostAllowed.union(.urlPathAllowed)
This chains nicely, but what pops out to me looking at the line of code is the CharacterSet, then something about URLs, and then I scan the line to see what kind of statement this is – some lengthy stuff that looks like chained method calls at first glance due to the length and the many dots.

Instead, I prefer a static method/function:

extension SetAlgebra {
    static func union(_ sets: Self...) -> Self {
        guard let first = sets.first else { return self.init() }
        return sets.dropFirst().reduce(first) { $0.union($1) }
    }
}
That way, I can write:

CharacterSet.union(.urlHostAllowed, .urlPathAllowed)

… and thus I can highlight the union operator in my code instead of bringing attention to the first set in the union. This gets even better when you add more sets into the union.

This is no segue to hate OOP and method calls. I think the traditional OOP style works fine for a lot of cases. But functions work better for some operator tasks. (I’m not ready to make this a generic free function just yet, but think it’s a sensible next step.)
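For reference, the free function I’m alluding to could look like this – a sketch, not something I’ve committed to:

```swift
// Hypothetical generic free function over any SetAlgebra conformer.
// Starts from the empty set, so union() of zero sets is well-defined.
func union<S: SetAlgebra>(_ sets: S...) -> S {
    sets.reduce(S()) { $0.union($1) }
}

// The call site would then drop the type prefix entirely:
// let combined: CharacterSet = union(.urlHostAllowed, .urlPathAllowed)
```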

This kind of customization is by no means ground-breaking. But it’s one of the things we can do in Swift that make working with Swift enjoyable. It’s the ‘hacking’ with the language. I don’t have to cook my own unionizing type, I can just amend the existing code a bit to suit my, well, tastes. Because functionally, the code is equivalent. And the reducer is probably harder to read if you don’t know how reducers work (hello past-me from 5 years ago!). But the surface API is much nicer and doesn’t require any arcane skills.

With recent events on my computer, this reminds me a lot of the hackability of Emacs Lisp and thus the editor I’m using to write this while it’s running, and of course Smalltalk and its IDE, and Ruby. It works differently, but it’s a similar kind of joy of hacking.

MacSymbolicator: Tool to Symbolicate Your Crash Reports

I am not good at reading crash logs of my apps. Some errors are obvious, like index out of bounds exceptions. Others require actual symbolication of the crash log to reveal the symbol aka function name in the stack trace. You can do this in the command line and interactively explore the crash reasons like a caveperson.
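For the record, the caveperson route goes through atos; the paths and the load address below are placeholders, not values from a real report:

```
# Resolve one stack-frame address against the dSYM's DWARF binary.
# -l is the binary's load address from the crash report's "Binary Images" list.
atos -arch x86_64 \
     -o MyApp.app.dSYM/Contents/Resources/DWARF/MyApp \
     -l 0x100000000 \
     0x10000f3c0
```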

Or you use a converter app.

I didn’t have any for quite a while, but was frustrated enough again today to search for an app.

MacSymbolicator 2.0 makes symbolication of crash reports super easy.

Enter MacSymbolicator 2.0!

It’s a window with two drag targets: one for the crash, and the other for the dSYM to resolve symbols from memory addresses.

The app even tries to find the dSYM for you. In my case, the crash log was for a previous app version but MacSymbolicator pulled up the dSYM for a newer version. Easy to fix, though:

  • head to your .xcarchive by revealing it in Finder from Xcode’s “Organizer” window,
  • show its package contents,
  • go to the dSYMs/ subdirectory,
  • drag the app’s .dSYM file into the right-hand pane.

It’s open source and just works. Lovely. Big shoutout to Mahdi Bchatnia, creator of this tool!

Visit his website for a binary download of the app, or GitHub for the source.

Refactoring Case Change Code to Idiomatic Emacs Lisp

I asked Xah Lee for feedback on my case changing functions. He’s fluent in Emacs Lisp, so I figured if he wanted to, he would’ve used my approach years ago. So there must be something I’m missing.

The factoring of the small helper functions doesn’t seem to be bad, but there are other reasons to design the text editor you use every day in one way or another:

Ergonomics: Limit modifier key usage

“[C]onsider the fact there are 100 often used commands but only say 50 good key spots” – with the default Emacs key bindings re-bound to my functions, I need 3 keys and hold a modifier to access them (M-c, M-u, M-d).

Xah bound this oft-used function to b to change the case (he’s using modal input, like vim, so it’s really just that key, no modifiers involved). b b b is the worst it gets: hit an easy to reach key three times in a row to cycle through all options, and a single b is the best case.

Holding modifier keys while typing a letter is commonplace, but not as good. Put what you use most often at your fingertips.

This reminds me a bit about the appeal of programmable keyboards with multiple layers. Some folk swear by the ability to enter a specific context and make the keys mean something different while they stay there. Modal key maps in editors aren’t much different from a pragmatic point of view.

Could it pay off to enable optional “editing mode” key bindings?

Weird Lisp

To recap, here’s the core function I introduced that would enable Elisp programmers to act on words:

(defun ct/word-boundary-at-point-or-region (&optional callback)
  (let ((deactivate-mark nil)
        $p1 $p2)
    (if (use-region-p)
        (setq $p1 (region-beginning)
              $p2 (region-end))
      (save-excursion
        (skip-chars-backward "[:alpha:]")
        (setq $p1 (point))
        (skip-chars-forward "[:alpha:]")
        (setq $p2 (point))))
    (when callback
      (funcall callback $p1 $p2))
    (list $p1 $p2)))

Xah’s main criticism is that this command’s name is bad and that it introduces complex patterns.

Passing functions apparently isn’t that common in Emacs Lisp. Of course it’s doable. “Function pointers”, let’s call these, are used to wire keyboard shortcuts all the time. It seems that people prefer to write simple functions with input and output.

I really, really like that the function above takes an optional callback and passes the result to it, too, if present, because then you can decorate functions easily. Since I discovered “East-Oriented Programming”, I’ve been hooked on the concept. It really helped me solidify object-oriented programming a bit more.

But Lisp isn’t an OO language.

Choice of language also ties into the other criticism: naming the function. It felt odd, and sure enough, at least 1 other person I respect for their experience agrees.

Here’s a call-site:

(defun ct/capitalize-word-at-point ()
  (ct/word-boundary-at-point-or-region #'upcase-initials-region))

The offending line is (ct/word-boundary-at-point-or-region #'upcase-initials-region).

In Ruby, for example, I wouldn’t mind a block for this, because it’d include do...end in the syntax, so while it’s still not ideal, it still reads like “act on this”:

word_boundary_at_point_or_region do |region|
  # act on the region
end

That’s not how I’d write idiomatic Ruby, but a mostly literal translation still works better.

Same in Swift or Objective-C, where I learned to value named parameters, adding something non-descript like “handle” helps:

wordBoundaryAtPointOrRegion(handle: { region in
    // act on region
})

Emacs Lisp doesn’t seem to favor function composition operators, like piping or applying, so we don’t get to express it like this: (ct/word-boundary-at-point-or-region |> upcase-initials-region).

Idiomatic back-and-forth Lisp seems to favor this classic approach of the returned value instead:

(defun ct/capitalize-word-at-point ()
  (let* (($bounds (ct/word-boundary-at-point-or-region))
         ($p1 (car $bounds))
         ($p2 (cadr $bounds)))
    (upcase-initials-region $p1 $p2)))

Simplifying, the fundamental element of Emacs Lisp is the list, not the function. With Ruby, it’s objects all the way down, so you design code differently.

I remember I asked Avdi Grimm one day about code he published, and how he’d approach using OOP not in a web app but a long-running native application. His answer stuck with me: “First, I’d consider the language I’m using.” – And with it, the environment and standard library etc, but first, the language.

I can pass functions around in Emacs Lisp just fine. Should I do it? Maybe not always.

It’s not like repeating the 3 lines of unpacking the bounds into two points is a huge pain. I can repeat that part. Extracting that repeated code into an “apply 2 items from the list as parameters to a function” helper doesn’t make much sense, yet that’s basically what I did here.

Since I don’t like the name I ended up with, I could split it into 2 variants: (ct/word-boundary-at-point-or-region) to return the values, and (ct/apply-word-boundary-at-point-or-region) for the callback forwarding. Also weird. I’ll try to roll with the simpler, albeit longer code.

Change Case of Word at Point in Emacs, But for Real This Time

At the moment, I’m proof-reading and editing the book manuscript of my pal Sascha for the new edition of the Zettelkasten Method book. As with most things text these days, I’m doing that with Emacs.

Something that continually drives me bonkers is how Emacs handles upcasing, downcasing, and capitalization of words by default. The functions that are called for the default key bindings are upcase-word, downcase-word, and capitalize-word. These sound super useful to fix typos. The default behavior is odd, though: They only change the case of the whole word when you select the word first. Otherwise they change the case of the remainder of the word beginning at the character at the insertion point. The docstrings say as much: “Capitalize from point to the end of word, moving over.” Why?

Anything before the insertion point is ignored; and when capitalizing, only 1 character is actually changed.

So the functions are aware of my intention to change the word. Why don’t they start at its beginning?

I can understand the underlying functions involved here that act on the region aka selection of the user. They usually expect 2 parameters, the start and end of the region where the effect should be applied. That’s super useful to compose effects with other functions because of its general nature.

The “convenient” behavior of the key-bound functions to change the case for the remainder of the word puzzles me, though. Is it because you can pass numerical parameters to it to continue from point onward N words forward or backward? I don’t know. Even then, why not start at the beginning while we’re at it? I would understand not acting on words at all without a selection, and just changing the case of the character at the insertion point’s location then. But this?!

Xah Lee, who seemingly has done every conceivable thing you can do to Emacs in the past 20 years, implemented his own ‘toggle letter case’ function that does what maybe not every Emacs Lisp programmer, but any writer would expect: to act on the whole word.

He opted to figure out word boundaries via the [:alpha:] regular expression. That’s maybe not always enough, but it’s good enough for typing text. And, unlike “thing at point”, it is consistent. Every mode can redefine what a “word” means in its context. (Which is useful on its own, but not helping to keep downcasing predictable.)

I changed Xah’s code a bit, because I don’t want to cycle through cases interactively. I’d rather hit M-u for ALL CAPS upcasing once.

Imagine a helper function ct/word-boundary-at-point-or-region that returns 2 character locations: the start and end of either the current region (i.e. selection in emacs) or the word below the insertion point.

The return value could be (100 110) for a 10 character word that starts at offset 100. The position of the insertion point notably doesn’t matter.

You can upcase a word like this, using car to get the first element of the returned list value, and cadr (aka (car (cdr x))) to get the last element.1
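A sketch of what that would look like, assuming the helper exists as described:

```elisp
;; Upcase the word at point (or region) using the returned (start end) list.
(let (($bounds (ct/word-boundary-at-point-or-region)))
  (upcase-region (car $bounds) (cadr $bounds)))
```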

Here’s a function that would utilize this to fetch both points and then capitalize the region:

(defun ct/capitalize-word-at-point ()
  (let* (($bounds (ct/word-boundary-at-point-or-region))
         ($p1 (car $bounds))
         ($p2 (cadr $bounds)))
    (upcase-initials-region $p1 $p2)))

I’d have to copy and paste that for all three case-changing functions I need. I’d rather extract the common theme here and change the approach to an adapter of sorts, if you pardon the OOP terminology when using Lisp.

The actual implementation of ct/word-boundary-at-point-or-region thus figures out the start and end points of a word, returns these, but also forwards them to a callback, if one is provided.

Here’s the implementation, mostly copied from Xah’s excellent code:

(defun ct/word-boundary-at-point-or-region (&optional callback)
  "Return the boundary (beginning and end) of the word at point, or region, if any.
Forwards the points to CALLBACK as (CALLBACK p1 p2), if present."
  (let ((deactivate-mark nil)
        $p1 $p2)
    (if (use-region-p)
        (setq $p1 (region-beginning)
              $p2 (region-end))
      (save-excursion
        (skip-chars-backward "[:alpha:]")
        (setq $p1 (point))
        (skip-chars-forward "[:alpha:]")
        (setq $p2 (point))))
    (when callback
      (funcall callback $p1 $p2))
    (list $p1 $p2)))

Let me walk you through the parts here in case Lisp is odd for you to read:

  • When a region is active, use the region beginning and end points for $p1 and $p2.
  • When no region is active, move the insertion point to the beginning of the word, save that as $p1, skip to the end of the word, save that offset as $p2. (And restore the original position thanks to the save-excursion decorator.)
  • If a callback function is given, pass the two points.
  • Always return a tuple of points via (list $p1 $p2).

Now I can get the tuple of points if I need, or I can tell the function to call another function and forward these points.

The implementation thus shrinks down to one-liners:

(defun ct/capitalize-word-at-point ()
  (ct/word-boundary-at-point-or-region #'upcase-initials-region))
(defun ct/downcase-word-at-point ()
  (ct/word-boundary-at-point-or-region #'downcase-region))
(defun ct/upcase-word-at-point ()
  (ct/word-boundary-at-point-or-region #'upcase-region))

;; Set global shortcuts
(global-set-key (kbd "M-c") #'ct/capitalize-word-at-point)
(global-set-key (kbd "M-u") #'ct/upcase-word-at-point)
(global-set-key (kbd "M-d") #'ct/downcase-word-at-point)

I prefer these one-liners over repeatedly unpacking 2 points from a tuple that was returned.

The actual capitalization should maybe be implemented a bit differently, though: upcase-initials-region only changes the case of the initials and leaves the remainder untouched, unlike capitalize-word which lowercases the rest. "fizzBUZZ" thus becomes "FizzBUZZ". My expectation is for the whole word to change, not just the initial characters, so I prefer to downcase the whole word first and then capitalize the initials for my current task:

(defun ct/capitalize-region (p1 p2)
  (downcase-region p1 p2)
  (upcase-initials-region p1 p2))
(defun ct/capitalize-word-at-point ()
  (ct/word-boundary-at-point-or-region #'ct/capitalize-region))

I have to say I really like function composition.

By the way, I also considered training myself to expand the selection to the current word first and then call the built-in case changing functions. There’s tools for that. But that sucks, and the default behavior of the built-in functions still is odd.

  1. If you’re new to Lisp, using only cdr is like a dropFirst call on an array, still returning a list, but with 1 element in this case. car then fetches this element. And cadr is a shorthand for this common combination to fetch the butt of a list, so to speak. 

→ Blog Archive