Thread Safe Property and Resource Access with the Transaction Wrapper

Here is a transaction type to copy & paste into projects. It encapsulates thread-safe read/write access:

struct Transaction {
    let queue: DispatchQueue

    func read<T>(_ value: @autoclosure () -> T) -> T {
        return read(block: value)
    }

    func read<T>(block: () -> T) -> T {
        return queue.sync {
            return block()
        }
    }

    func write(block: @escaping () -> Void) {
        queue.async(flags: .barrier) {
            block()
        }
    }
}

Just make sure not to use the main queue, because a .sync call from main to main will deadlock your app!

It ensures that values are read synchronously, which isn’t dangerous, and that write operations are enqueued and executed in order. This is useful if you need to access a resource from multiple threads and want to avoid the overhead of mutex locks.

I called it Transaction because the result is access to resources in a way similar to ACID transactions: complex operations are executed in a batch (Atomicity), write operations don’t get in each other’s way (Consistency & Isolation); the Durability part is debatable, though.

Usage example

Here’s how you’d wrap read and write operations for a totally contrived example:

class Counter {
    private static let queue = DispatchQueue(label: "counter", qos: .background)
    private var count = 0

    var tx: Transaction {
        return Transaction(queue: Counter.queue)
    }

    func increment() {
        tx.write {
            self.count += 1
        }
    }

    var currentValue: Int {
        return tx.read(self.count)
    }
}

Checking if this really does help

Here’s an example program to demonstrate the effects:

let counter = Counter()
for _ in (0 ..< 20) {
    counter.increment()
    DispatchQueue.global(qos: .background).async {
        // This is actually bad sample code because it's two calls where time passes in between
        print(counter.currentValue, terminator: ", ")
    }
}

Its output:

10, 10, 10, 10, 10, 10, 10, 10, 10, 17, 10, 18, 18, 18, 19, 20, 20, 20, 20, 20

You see there are 20 print statements, but they don’t happen right after each increment. They are still in order, but apparently some “read” calls were held up by pending “write” operations and had to wait, e.g. the last 5.

Here’s one without the Transaction’s queue being used, demonstrating how random the values can appear if you read and write willy-nilly:

14, 14, 15, 14, 16, 15, 15, 19, 20, 20, 15, 20, 15, 20, 20, 20, 20, 20, 20, 20
            ^^      ^^  ^^              ^^      ^^

It’s even more apparent with 100+ iterations, but I won’t print these here.

Property Wrapper variant

It cannot be a struct, because the asynchronous setter needs to capture a reference to self to perform the mutation later.

Also keep in mind that a one-line increment like += 1 actually amounts to one get plus one set call. That will produce results you do not anticipate. The counter example above with a naive += 1 ends at the value 5 after 100 iterations on my machine, for example, because a ton of reading happens quickly and asynchronously before a write operation sets up the barrier.

Do access the backing property directly with the underscore prefix and call e.g. _count.mutate { $0 += 1 } to do it all at once.

That is, in effect, similar to tx.write { self.count += 1 }.

@propertyWrapper
final class TransactionWrapper<Value> {
    private let queue: DispatchQueue
    private var value: Value

    init(wrappedValue: Value, queue: DispatchQueue) {
        self.queue = queue
        self.value = wrappedValue
    }

    var wrappedValue: Value {
        get { queue.sync { value } }
        set { queue.async(flags: .barrier) { self.value = newValue } }
    }

    func mutate(_ mutation: (inout Value) -> Void) {
        queue.sync {
            mutation(&value)
        }
    }
}

And then use it like so:

class Counter {
    private static let queue = DispatchQueue(label: "counter", qos: .background)

    @TransactionWrapper(wrappedValue: 0, queue: Counter.queue)
    private(set) var count: Int

    func increment() {
        _count.mutate { $0 += 1 }
    }
}

Conclusion and usage

I use the Transaction type for access to a cache and for object repositories. It works like a charm, and the call site doesn’t get too distracting. A very useful tool.

Heads up: There’s no need to go all-in with this. The overhead of async calls with the .barrier flag is not to be underestimated. Simply setting the value is orders of magnitude faster. Only protect resources that you absolutely need to, and only introduce concurrency when you cannot get by without it. Network API calls should not write to resources via transaction; they should finish in the background and then hand the result to your app on the main queue.

I didn’t invent any of this. There’s plenty of discussion on the web, including more recent iterations on the topic.

Swift API Docs for String.index(_:offsetBy:limitedBy:) Is Still Misleading

When you look at the docs for String.index(_:offsetBy:limitedBy:), you get this description:

Returns an index that is the specified distance from the given index, unless that distance is beyond a given limiting index.

The “unless that distance is beyond a given limiting index” part got my attention. I remembered it to be a pain to use in practice. The overall API docs page looks way too innocent for that.

So what’s the example code?

let s = "Swift"
if let i = s.index(s.startIndex, offsetBy: 4, limitedBy: s.endIndex) {
    print(s[i])
}
// Prints "t"

Paste it in a Playground. You’ll see that, yes, this prints t.

But what if I change the offset to 5 to actually see how the limit behaves at the limit?

Here’s the changed code for you to copy and paste, with the 5 instead of 4:

//  /!\  Warning! Do not use in production!   /!\
let s = "Swift"
if let i = s.index(s.startIndex, offsetBy: 5, limitedBy: s.endIndex) {
    print(s[i])
}

What does it print?

Nothing, you hope?

Because the String is too short?

(cue sad trombone)

Surprise! It’s a runtime error!

“Ok, Christian, the API sample was just for demoing a valid use. You inferred something wrong from what you saw”, you might say.

Did I?

The String.index(_:offsetBy:limitedBy:) docs go on like this:

The next example attempts to retrieve an index six positions from s.startIndex but fails, because that distance is beyond the index passed as limit.

let j = s.index(s.startIndex, offsetBy: 6, limitedBy: s.endIndex)
print(j)
// Prints "nil"

… which is true, this code returns nil and does not crash. Oh, so it’s safe to try to get to indices beyond the bounds, nice, right?

Up next is this hand-wavy explanation:

The value passed as n must not offset i beyond the bounds of the collection, unless the index passed as limit prevents offsetting beyond those bounds.

Remember: This is API documentation, not marketing copy. So is it safe to try to access an index beyond the bounds? It’s hard to say. All of this can be utterly misleading.

What does “prevent offsetting beyond those bounds” mean?

Is it allowing indexes up to the limit, or up to and including the limit?

The docs don’t tell. You have to force yourself to consider this a vague statement and test its implications. Experienced programmers will undoubtedly do that thanks to years and years of trial by fire – but that doesn’t make the docs any better. If you start developing in Swift, you might not notice the vague statement here, nor the omission of a specification of what happens when index + offset == endIndex.

If you copy the sample code from the docs to play around, and only test the resulting algorithms in your program with an offset somewhere inside the bounds of the string and somewhere way past the end of the string, you might be tricked into believing your algorithm works as expected. It doesn’t, though, because these tests made you overlook what happens when the result is equal to endIndex. That’s false confidence. The code does work for the cases you tested, yes, but it utterly crashes for one case.

So the crux is this: the offset must not produce an index at endIndex, because endIndex is not a valid subscript argument – so it’s useless as a limit here.1

From my understanding, the API docs convey the wrong information through these examples. (So my initial doubt about how simple it all looked was warranted, and now I’m becoming even more paranoid, great!) The function works really nicely when you know what to do, but String.endIndex doesn’t mean what you might take away from the sample.

The docs on endIndex say:

A string’s “past the end” position—that is, the position one greater than the last valid subscript argument.

You want the limitedBy: parameter not to be past the end; you want it to be at the end. Why is the sample code using an obviously incorrect approach to show how the function works?

That means you need to figure out the index before endIndex, provided the string is long enough to actually cover that. (An empty string has its startIndex equal to its endIndex.)

The following is an actually useful, safe way to limit String index access to the String’s real length:

let s = "Swift"

if let indexAtEnd = s.index(s.endIndex, offsetBy: -1, limitedBy: s.startIndex),
    let i = s.index(s.startIndex, offsetBy: 5, limitedBy: indexAtEnd) {
    print(s[i])
}
// nothing happens, as expected!

It’s also ugly as heck.
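A small extension can hide that dance behind a friendlier name. Here’s a sketch of such a helper (the name validIndex(offsetBy:) is mine, not a standard API); it returns an index only when it is a valid subscript position:

```swift
extension String {
    /// Returns an index you can actually subscript with, or nil if the
    /// offset lands at or beyond endIndex.
    func validIndex(offsetBy distance: Int) -> String.Index? {
        guard let i = index(startIndex, offsetBy: distance, limitedBy: endIndex),
            i != endIndex
            else { return nil }
        return i
    }
}

let s = "Swift"
s.validIndex(offsetBy: 4)  // index of "t"
s.validIndex(offsetBy: 5)  // nil instead of a trap
s.validIndex(offsetBy: 6)  // nil, as before
```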

Remember: that’s why you need to be extra cautious and thorough in your unit tests, kids.

Always be testing

  • right before,
  • at, and
  • right after the limits you want to verify.

Don’t just test somewhere before and somewhere after the limit. Imagine your tests are clamps, and your problem is being clamped so tight it cannot escape. No wiggle room, no margin for error.
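Applied to the 5-character example string from above, such clamp tests could look like this sketch (plain asserts standing in for real unit test cases):

```swift
let s = "Swift"  // 5 characters; the last valid subscript offset is 4

// Right before the limit: still a valid subscript position.
assert(s.index(s.startIndex, offsetBy: 4, limitedBy: s.endIndex) != nil)

// At the limit: the call succeeds and returns endIndex,
// which is NOT a valid subscript argument. This is the trap.
assert(s.index(s.startIndex, offsetBy: 5, limitedBy: s.endIndex) == s.endIndex)

// Right after the limit: only now does the limit kick in and you get nil.
assert(s.index(s.startIndex, offsetBy: 6, limitedBy: s.endIndex) == nil)
```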

—Now guess who was bitten by not testing that way in a particular place.

In other words, an update to The Archive is coming out soon.

I also filed a Feedback issue for this, finally.

  1. I know, you probably understood this point about 5 paragraphs earlier already; but no amount of editing this post seems to make the undertone of disappointment go away. I just can’t get rid of it. It … Is. Too. Strooooong. 

MultiMarkdown Filter for nanoc

I recently dropped rendering blog posts via MultiMarkdown. I used MMD to support citations, but this is not a book, this is a website!

So I retired my MultiMarkdown processor for nanoc, the static site generator that I use.

If you need something like it for your project, here it is:

require "systemu"

class MultiMarkdownFilter < Nanoc::Filter
  identifier :mmd
  type :text

  def run(content, args = {})
    output = ''
    stderr = ''
    status = systemu(
      [ 'multimarkdown' ],
      'stdin'  => content,
      'stdout' => output,
      'stderr' => stderr)

    unless status.success?
      $stderr.puts stderr
      raise RuntimeError, "MultiMarkdown filter failed with status #{status}"
    end

    output
  end
end


It requires a local installation of the MultiMarkdown binary. It’s not part of any Ruby gem that I’m aware of, so you need to install it separately, e.g. via brew install multimarkdown.

Use it in your Rules file:

compile '/posts/**/*' do
  filter :erb
  filter :mmd  # <----
  layout '/default*'
end

Lock App Features Behind a Paywall and Enforce the Lock in Code

I stumbled upon an interesting coding problem in a recent macOS project related to in-app purchases. IAP can be represented by a feature option set in code. How do you secure UserDefaults access in such a way that reading values can be locked via the available IAP feature options? (This also applies to tiered licenses, like a “Basic” and a “Pro” feature set.)

Let’s say you represent the purchase-able features as such:

struct Feature: OptionSet {
    let rawValue: Int

    init(rawValue: Int) {
        self.rawValue = rawValue
    }

    static let export = Feature(rawValue: 1 << 1)
    static let analysis = Feature(rawValue: 1 << 2)
    static let advancedProcessCounter = Feature(rawValue: 1 << 3)
}

You can use these in code like this:

let purchasedFeatures: Feature = [.analysis, .export]

Let’s say you can get there through a service object that looks at the current license to figure out which in-app purchase is unlocked. For the sake of this post, it doesn’t matter if you use Apple’s API, the Paddle SDK, or a home-grown solution.

class LicenseReader {
    func purchasedFeatures() -> Feature { ... }
}

You store the value for advancedProcessCounter in a file or UserDefaults, because that’s simple.

private let advancedProcessCounterKey = "Advanced.processCount"

class AdvancedStuff {
    var processCounter: Int {
        get {
            return UserDefaults.standard.integer(forKey: advancedProcessCounterKey)
        }
        set {
            UserDefaults.standard.set(newValue, forKey: advancedProcessCounterKey)
        }
    }

    // Here be code that displays/uses the counter value ...
}

Now let’s hide this behind a paywall.

In your first approach, you ask the LicenseReader for available features directly and either read/write the value, or default to 0 if the user didn’t purchase that advanced thingie. It’s a valid approach to test if the paywall works at all:

private let advancedProcessCounterKey = "Advanced.processCount"

class AdvancedStuff {
    private let licenseReader: LicenseReader = ...
    private var isProcessCounterUnlocked: Bool {
        return licenseReader.purchasedFeatures().contains(.advancedProcessCounter)
    }

    var processCounter: Int {
        get {
            guard isProcessCounterUnlocked else { return 0 }
            return UserDefaults.standard.integer(forKey: advancedProcessCounterKey)
        }
        set {
            guard isProcessCounterUnlocked else { return }
            UserDefaults.standard.set(newValue, forKey: advancedProcessCounterKey)
        }
    }

    // Here be code that displays/uses the counter value ...
}

  • LicenseReader.purchasedFeatures is used to test for availability of the .advancedProcessCounter feature
  • isProcessCounterUnlocked wraps this check and prevents access to the stored value

You run the app, things work. Nice.

Now you modularize the growing app and want to extract the processCounter reading/writing into another object. This way, your app doesn’t need to know if the setting is stored in UserDefaults, a PList somewhere else, or in the cloud at this point and can defer the knowledge to a pluggable component.

Only now do you notice that the I/O of the setting is closely coupled to the LicenseReader, which makes reorganizing the code harder. You don’t want to dependency-inject the LicenseReader, either.

The whole license-reading business should not be relevant for reading the value from its source.

But you want the value to be “locked” behind a paywall as strictly as possible.

Deadbolt Door Lock photo courtesy of Tony Webster, CC BY 2.0

Encapsulate Value Access in a Paywall Object

Currently, the processCounter: Int property directly returns the stored value, or a fallback if the paywall prevents access. How can you strictly require a feature availability check but still reduce the functionality of the defaults reader to simply reading the value?

You can split this into multiple steps:

  1. Read the real value from disk/defaults/…
  2. Wrap it in a paywall check before returning
  3. Satisfy the paywall check when the value is used

If we were thinking about value transformations, it’d be of the form:

() -> (value: Value, required: Feature) -> (available: Feature) -> Value?
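For illustration, here’s a rough function-based sketch of that shape (the names locked and LockedValue are mine, not from the post; a trimmed Feature option set is repeated so the snippet stands alone):

```swift
// Trimmed copy of the article's Feature option set, so this compiles on its own.
struct Feature: OptionSet {
    let rawValue: Int
    static let export = Feature(rawValue: 1 << 1)
    static let analysis = Feature(rawValue: 1 << 2)
}

// A locked value is a function: feed in the available features, get the
// value back only if the requirement is covered.
typealias LockedValue<Value> = (_ available: Feature) -> Value?

func locked<Value>(_ value: Value, requiring required: Feature) -> LockedValue<Value> {
    return { available in
        available.isSuperset(of: required) ? value : nil
    }
}

let lockedCount = locked(42, requiring: .analysis)
lockedCount([.analysis, .export])  // 42
lockedCount([.export])             // nil
```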

We’ll have a look at an object-based approach now.

Instead of directly running a query method on LicenseReader right away, you defer this call to a later point in time, and also abstract away the details.

class SettingsReader {
    var advancedProcessCounter: Paywall<Int> {
        // 1) Read the real value
        let value = UserDefaults.standard.integer(forKey: advancedProcessCounterKey)

        // 2) Wrap it in a deferred paywall check
        return Paywall(value: value,
                       requiredFeatures: .advancedProcessCounter)
    }
}

With this code, the real value is read and there’s no check at this very point, but we can still express a feature requirement.

Here’s a Paywall implementation:

struct Paywall<Value> {
    private let value: Value
    let requiredFeatures: Feature

    init(value: Value, requiredFeatures: Feature) {
        self.value = value
        self.requiredFeatures = requiredFeatures
    }

    func value(availableFeatures: Feature) -> Value? {
        var covered = requiredFeatures
        covered.formIntersection(availableFeatures)
        guard covered == requiredFeatures else { return nil }
        return value
    }
}

The magic happens in Paywall.value(availableFeatures:): it protects access to the value. You only get the value if all required features are covered by the parameter. If one or more are missing, you get back nil.

Now your client code, like view controllers, can provide the glue:

class AdvancedComputationViewController: UIViewController {
    let settingsReader: SettingsReader = ...
    let licenseReader: LicenseReader = ...

    var processCount: Int {
        let paywall: Paywall<Int> = settingsReader.advancedProcessCounter
        let value: Int? = paywall.value(availableFeatures: licenseReader.purchasedFeatures())
        return value ?? 0
    }

    // ...
}

You see how the LicenseReader is used to attempt to unlock the Paywall<Int>. If it fails, you return the default value 0 (same as in the very beginning).

The Paywall type forces you to pause for a second when you want access to protected details. You need to figure out a way to pass in the availableFeatures option set, or else you cannot get to the value. The type forces you to remember that this value is locked.

The knowledge about “being locked” is encapsulated in the Paywall type.

A more naive implementation could have read the real process count and then implemented the paywall-check in place:

var naiveProcessCountWithPaywall: Int {
    let value = UserDefaults.standard.integer(forKey: advancedProcessCounterKey)

    // The paywall is here
    guard licenseReader.purchasedFeatures().contains(.advancedProcessCounter) else { return 0 }

    return value
}

Problem is: you can easily overlook that this is supposed to be a paywall-protected value. Forgetting the guard statement ruins the whole point of making the feature purchasable. Oops.

A type like my Paywall prototype above makes the type system do the work. That’s a beautiful, Swift-y way to help avoid problems instead of having to remember them and write unit tests to catch errors.

It also helps split your app into multiple, decoupled modules. The SettingsReader does not need to know how available features and the license code are accessed. This is useful if you e.g. inline your license-accessor code, making discovery of license-related code harder for crackers.

Show Light Text on Dark Recessed NSButton with an Image

This is how I made a dark NSButton with NSButton.BezelStyle.recessed display legible light text on a dark background on macOS Mojave and up.

Recently, a user of The Archive pointed out that the in-app guide doesn’t display properly with a light system appearance. In dark mode, you wouldn’t notice the problem, but in light mode, the button colors rendered them illegible. Only when you press a button does its text light up properly for its background color. Have a look:

Cannot make out the buttons? Yeah, me neither. Used to work, though.

Something must’ve changed over the last 12 to 18 months or so, because I am very positive that the colors were legible when I introduced the feature – the screenshots and video from the GitHub project README clearly show so.

Unlike UIButton, NSButton doesn’t display its text with a label. So you cannot just set a title label’s textColor to a meaningful, preferably “semantic” system color. You have to customize attributedTitle, which is an NSAttributedString. Changing the color means setting the .foregroundColor attribute to a different value.

This level of contrast is much better.

Generally, I have despised attributed string APIs on macOS since I began fiddling with NSTextView. That’s because the default setting there is not an empty attribute collection but includes a slew of settings, including a custom font and font size. You cannot just say “I want to use a default text view but change the font family” without reading the previous attributes, including the font size. It’s not that much of a problem in practice, but I would rather do it in a different way.

Even though I found that simply replacing the default attributedTitle completely didn’t look odd at all, I don’t want to accidentally break macOS version dependent attribute settings, so here, too, we’ll apply the foreground color carefully as a patch:

open class DarkButton: NSButton {
    override open var title: String {
        didSet {
            updateTitleColor()
        }
    }

    private func updateTitleColor() {
        guard self.attributedTitle.length > 0 else { return }
        var attributes = self.attributedTitle.attributes(
            at: 0,
            longestEffectiveRange: nil,
            in: NSRange(location: 0, length: self.attributedTitle.length))
        attributes[.foregroundColor] = isEnabled ? NSColor.white : NSColor.black

        self.attributedTitle = NSAttributedString(
            string: self.title,
            attributes: attributes)
    }

    // ...
}

The image is a template image and doesn’t color itself properly either. I didn’t find a conclusive way to affect a template image’s color: it appears that NSButtonCell is responsible for this and theoretically comes with customization options in its setCellAttribute(_ parameter: NSCell.Attribute, to value: Int) method. But fiddling around with that didn’t yield the results I expected. A push button has two states, and its cell can be customized to draw differently in each state. I don’t want to override the cell or fiddle around with its properties for highlighted/selected images.

Instead, I make a colored copy of the template image and found this to work quite nicely.

extension NSImage {
    fileprivate func tintedNonTemplate(color: NSColor) -> NSImage {
        let image = self.copy() as! NSImage
        image.lockFocus()

        color.set()
        let imageRect = NSRect(origin: NSZeroPoint, size: image.size)
        imageRect.fill(using: .sourceAtop)

        image.unlockFocus()

        // To distinguish the original vector template from the tinted variant,
        // make the copy a non-template.
        image.isTemplate = false

        return image
    }
}

To adjust the color later when the isEnabled state changes, I cache the original template, though:

class DarkButton: NSButton {
    // ...

    open override var image: NSImage? {
        didSet {
            guard let image = self.image else { return }
            guard image.isTemplate else { return }
            self.templateImage = image
        }
    }

    var templateImage: NSImage? {
        didSet {
            updateImageFromTemplate()
        }
    }

    override open var isEnabled: Bool {
        didSet {
            updateTitleColor()
            updateImageFromTemplate()
        }
    }

    private func updateImageFromTemplate() {
        guard let templateImage = self.templateImage else { return }
        self.image = templateImage.tintedNonTemplate(color: self.isEnabled ? .white : .black)
    }
}

This isn’t meant to be a drop-in base class replacement for all of your buttons in all of your apps. This does introduce some weird behavior, like accepting a black template image but rendering a white non-template image instead. It’s a hack. It violates the Liskov Substitution Principle. I know all that. But it gets the job done for this one interface element.

The updated source of the overlay library is on GitHub.


Whole Value Pattern

I often forget the name of this thing, then I search for it, and forget it again later.

It’s the Whole Value Pattern.

The “Whole Value” pattern means you should stop using primitive or literal data types as quickly as possible. (Since Swift has no non-object primitives, you have to look a bit harder to spot these, but “literal value” is a pretty good indicator.)

Example: Instead of passing an Int of the value 21 around, create a Weeks(3) object to properly carry the meaning or intent with it.

Primitive types can represent anything; specialized objects cannot.
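A minimal sketch of what such a whole value could look like in Swift (the Weeks type and its API here are mine, just for illustration):

```swift
// A whole value: carries its unit and intent with it.
struct Weeks {
    let count: Int
    init(_ count: Int) { self.count = count }
    var days: Int { return count * 7 }
}

// The call site now states what the number means:
func remindUser(in weeks: Weeks) { /* ... */ }
remindUser(in: Weeks(3))
// remindUser(in: 21)  // does not compile: 21 what? Days? Weeks? Sessions?
```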

I’m not certain if Foundation’s Date type counts. It represents a point in time better than, say, the millisecs() timestamp return value. But it’s also different from e.g. Weeks or Days, or a proper Money type (which you should get quickly if anyone on your team tries to model currency as floating point values!).

(My fairly old notes on this pattern reference “The CHECKS Pattern Language of Information Integrity” for details. The whole website is a source of amazing stuff.)

Maybe Call Your UI Configuration Objects ViewData Instead of ViewModel

Joe Fabsevich (@mergesort) proposes to call data to configure UI elements “View Data” and keep the objects dumb.

From part 1:

In my experience, things become harder to maintain when they start becoming a crutch, as a place to put your code if it doesn’t neatly fall into the Model, View, or Controller label.

With this in mind, I realized we need an answer for configuring our views in a way that’s maintainable, and ultimately transforms one or multiple models into a view.

Joe writes that it’s futile to look for a “view model” definition (in the MVVM pattern sense) because there are so many; and that in most cases you don’t need a pattern where the view model exposes behavior to control view components anyway. Simple configuration objects often suffice.

It’s often enough to lump together related configuration properties into what he calls ViewData.

Views then expose a configure(viewData:) method. You pass in the data, the UI component updates itself accordingly.
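A minimal sketch of that shape (all type and property names here are mine, not from Joe’s articles):

```swift
import UIKit

// Dumb configuration data: no behavior, just what the view needs to display.
struct ProfileViewData {
    let name: String
    let followerCountText: String
}

final class ProfileView: UIView {
    private let nameLabel = UILabel()
    private let followerLabel = UILabel()

    // The view knows how to apply the data to its own components.
    func configure(viewData: ProfileViewData) {
        nameLabel.text = viewData.name
        followerLabel.text = viewData.followerCountText
    }
}
```

The view controller then only assembles a ProfileViewData from its models and calls configure(viewData:), instead of reaching into the view’s labels directly.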

Read Joe’s articles here:

  • Part 1: general set up, relationship between ViewData and UI components.
  • Part 2: how to map complex state to different complex UI data, and handle user interactions.

I think the view component itself is best place to put this stuff, too. It’s the view’s responsibility to display something. So give it a configure method and let it do its job!

If you follow online tutorials, you often end up with the view controller reaching into the view’s boundary to set its properties. That’s contrary to common OOP heuristics which have proven to be useful in a lot of cases (read: not 100% of the time, but often, so apply with caution) – like the “Single Responsibility Principle”, which is violated if the controller also has the responsibility to do all the view’s work for it, and the general idea of treating objects as self-contained black boxes. People also speak of “train wrecks” when there are objects.reaching.into.objects.a.lot.

I’m still a fan of also using the term “view model” for this purpose, as in “a model of the view”, but if you get confused by the overloaded pattern name, then I guess ViewData is just as fine.

You can also call it a “parameter object” or “argument object” because it lumps together all the necessary configuration parameters of a view component’s configure method, and not much more. The cohesion of the view data object may be solely determined by the requirements of the view component – which may be arbitrary, but a necessary evil for display purposes. This doesn’t qualify as a “model” in a traditional sense that I can think of.

AppMover Swift Library to Move Your macOS App to the Applications Folder

Oskar Groth published a modern iteration of the “LetsMove” framework where you can show a dialog at app launch, asking the user if she wants to move the app to /Applications first.

This is still a crucial feature if you distribute downloads that unpack into the ~/Downloads folder: Gatekeeper’s App Translocation will actually start the app from a random sandboxed folder. To prevent this from happening, distribute your apps as a DMG with a shortcut to /Applications to nudge your users to move the app there directly.

Get it on GitHub.


Community Trumps All

This is not a pun on the U.S. president. Going through stuff from the past year, I just noticed how much more amazing daily life feels with a community. In the past 2 years, I helped found two communities:

  1. The Zettelkasten Forum, a digital hangout for supportive and overall amazing people discussing creative knowledge management, and
  2. a local group of sketchers and artists who meet every other week or so to hang out, sketch, draw, paint and photograph Bielefeld-local stuff and talk about creating art.

Without the local group of Urban Sketchers, I wouldn’t have progressed with my watercolor skills; and without the forums, there wouldn’t be a lot of places to hang out to talk about what I find most interesting about personal knowledge management: creating new insights!

So 2019 apparently was my personal year of community, if I were looking for a motto. Thanks, folks, for being part of this whole thing!

→ Blog Archive