Hosting Downloads on Amazon S3 with CloudFront

Since early 2019, I have hosted downloads for my app The Archive on Amazon’s servers. The S3 bucket is cheap-enough storage for the zip files, and the CloudFront cache is a content distribution network that improves download speeds across the globe.

Here’s a long tutorial, because I will most likely forget how I did all this in a while, and chances are you don’t know how to do this either.

  • Don’t be afraid. There are a lot of things you may not know, but they are not complicated, only very bare-bones.
  • It’s quite fun, actually. When I had the setup running, I felt like a cloud computing wizard. With this, you could set up websites with fast response times around the globe and scale easily!

Backstory and End Result

A couple of years ago, I dabbled with Amazon S3 storage to host downloads of my macOS apps.

One lesson I learned: do not point the app’s update feed setting to your S3 bucket. That’s too fragile. Use your own domain and redirect if needed. This way you retain 100% control of where the traffic goes. If you point to S3 directly, you give away part of that control. And when Apple enforced TLS for macOS app updates, you end up with users not being able to download from a server you cannot control. Dang.

But hosting website downloads on AWS S3 still works fine. Add CloudFront on top of it, and your downloads will be served from a server closest to the request’s origin. It’s a web cache of sorts, useful for content distribution. Using CloudFront actually reduces traffic cost, so that’s what we’ll be using.

This post is how I configured an Amazon S3 bucket with the modern non-public settings, and then set up CloudFront to actually publish the contents. In this case, I publish an index.html file to prevent directory listings. The rest is .zip files for downloads.

This is the result, using a .htaccess redirect of the download endpoint on my website:

$ curl -I
HTTP/1.1 303 See Other
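The redirect itself lives in an .htaccess file on my web server. As a rough illustration of what that endpoint does, here is a hypothetical Python handler; the CloudFront domain and file name below are placeholders, not my actual URLs:

```python
from http.server import BaseHTTPRequestHandler

# Sketch of the download endpoint: answer with 303 "See Other"
# and point the client at the CDN. The URL is a placeholder.
class DownloadRedirect(BaseHTTPRequestHandler):
    def do_HEAD(self):
        self.send_response(303)  # "See Other"
        self.send_header("Location",
                         "https://EXAMPLE.cloudfront.net/TheArchive-latest.zip")
        self.end_headers()

    do_GET = do_HEAD  # redirect regular downloads the same way
```

The point of the indirection: the public download link never changes, while the Location target can move between hosts at any time.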

I’m not an expert in cloud computing, cloud hosting, or cloud anything, really. Amazon S3 was the fanciest thing I used. I once read that you could host static websites in S3 buckets, but it didn’t sound as cost-effective, so I never looked into it any further. But with the knowledge from this post, you will be able to host a static website on S3 with great response times thanks to CloudFront if you want.


You will need an AWS account. Then click around in the console to familiarize yourself with the menu. You will need to switch between S3 and CloudFront, so try to locate the items in the huge services list, or use the search.

We will be creating a new bucket to host files from. If you don’t know anything about any of this and want to play around with S3 first, follow along the instructions from the manual:

  1. Sign up for Amazon S3
  2. Create a Bucket
  3. Follow the other steps for uploading and looking at files. Once you have a bucket, you can play around with the bucket “file” browser and upload stuff, click on the file, and view the absolute URL for reference.

Chances are you won’t be able to download files or view HTML pages from S3 buckets because public read access is not enabled. And granting public read access is discouraged nowadays, as we’ll see shortly.

Amazon Access Rights

The access right levels for S3 directories are:

  • “List objects”,
  • “Write objects”,
  • “Read object permissions”,
  • “Write object permissions”.

For files, read access is called “Read object”.

You will not want to give anyone else access to file or directory permission settings. The “List objects” and “Read object” settings are the most interesting ones.

Naively, I granted read access (“List objects” and “Read object”) to “Everyone” when I began hosting downloads on Amazon S3. But it turns out that generating lots of traffic with direct downloads from S3 buckets is more costly than using the CloudFront distribution service. So it pays off to enable the CloudFront CDN to cache files in multiple data centers around the world, backed by a single S3 bucket as the repository.

CloudFront being a cache means changes to the repository will not be visible immediately. Keep that in mind when you experiment with settings. You will need to reset the CloudFront cache in these cases to force re-caching. We’ll get to that later down the road.
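As a mental model for this, here is a toy Python sketch (not AWS code; all names are made up): the edge cache serves whatever it fetched first, until you explicitly invalidate it.

```python
# Toy model of an edge cache in front of an origin bucket.
# Illustrates why changes aren't visible immediately.
class ToyEdgeCache:
    def __init__(self, origin):
        self.origin = origin  # dict standing in for the S3 bucket
        self.cache = {}

    def get(self, key):
        if key not in self.cache:
            # First request: fetch from the origin and keep a copy.
            self.cache[key] = self.origin.get(key)
        return self.cache[key]

    def invalidate(self):
        # Like a CloudFront invalidation with path "/*".
        self.cache.clear()

bucket = {"TheApp.zip": "v1"}
cdn = ToyEdgeCache(bucket)
print(cdn.get("TheApp.zip"))  # "v1", now cached at the edge
bucket["TheApp.zip"] = "v2"   # upload a new build to the bucket
print(cdn.get("TheApp.zip"))  # still "v1": the cached copy is stale
cdn.invalidate()
print(cdn.get("TheApp.zip"))  # "v2" after the invalidation
```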

Public read access is considered a bad practice since IT companies apparently leaked private data this way. CloudFront is supposed to provide another layer around this and safeguard against putting private files into public folders.

You will want to keep all access levels enabled for your own AWS account, of course.

So here’s how to use Amazon S3 to host files (or a static website) and offer download links using the CloudFront content distribution network.

Set up File Storage

First, you need an Amazon S3 Bucket to upload files.

Create a New Bucket

Navigate to S3 in the AWS Console.

There, select “Create Bucket” and enter a readable ID for the bucket name. Its domain will be something like BUCKETNAME.s3.amazonaws.com, so you may want to keep it recognizable. During the permissions setup step, keep the public access blocking enabled. This will make AWS complain when you try to make a file publicly visible.

It’s supposed to be for your protection, and we won’t need direct S3 access in a minute anyway, so stick with this.

Change an Existing Bucket for New Access Management

I had an existing bucket, so here’s what I did.

You may want to keep your bucket’s admin page in the AWS console open in a separate browser tab.

Go to your S3 bucket’s permissions. (Select S3 from the Services menu; then select your bucket from the list; then select the “Permissions” tab.)

Select the “Access Control List” permission setting page. (Yeah, another layer of tabs below tabs. I get lost in the AWS console a lot.) Remove “List objects” access for everybody but the owner.

Select the “Public access settings” permission setting page and edit the resulting settings. Enable all checkboxes to block future uploads from being made public and to retroactively remove public access settings from all existing files.

Now the S3 bucket contents cannot be accessed via direct links anymore. Next, we set up the CloudFront CDN to provide the actual downloads.

Set up CloudFront for Distribution

With the bucket’s content being sufficiently protected via private read-only access, we can add the CloudFront layer to manage actual publication of our contents.

Navigate to the CloudFront service in the AWS console.

This is a two-step process: you will create a CloudFront distribution (resulting in a public URL) and a CloudFront user that will have read access to your files.

I think this is similar to local web server configurations where your site content is generally protected in your user’s home folder, but the Apache web server can read and show the data.

To save yourself some manual setup steps, we start with the user setup because then CloudFront does most of the grunt work.

Create a CloudFront “User”

To grant CloudFront read access to your bucket, you have to link the two services. This works by creating a CloudFront user, so to speak.

With CloudFront still open, in the menu to the left select Origin Access Identity (OAI) below the “Security” sub-heading. Create a so-called OAI, which will create one such user for you.

The created OAI consists of a short ID and a long canonical user name hash. You need the latter to grant access to individual files. You need the former for bucket policies later on, though. So keep both handy.

I entered “The Archive downloads” as a comment for the ID to identify it later.

Create a CloudFront Distribution

Select “Create Distribution”, then “Get Started” for a Web Distribution.

Origin Name is your S3 bucket. Click inside the field for suggestions.

Origin Path can be left empty if you want to publish the whole bucket. I, in fact, used a public sub-directory.

Now make sure to choose Restrict Bucket Access (Yes) and then, for Origin Access Identity, pick “Use an Existing Identity”. In the drop-down, select the CloudFront user you just created. (If you edit an existing distribution, this setting will be tied to items in your “Origin” tab, not in the general distribution settings.)

For your convenience, also pick “Yes, Update Bucket Policy” for Grant Read Permissions on Bucket. That’ll create the policies for you, which is nice.

I left most other settings at their defaults. HTTPS is nice, and I don’t need to forward query parameters for downloads, so no need to change any of these.

I did change Default Root Object to index.html so I can add a simple HTML file with a link back to the main website in case someone copies the download link and tries to browse the directory.

Create the distribution.

You will need to wait a couple of minutes (about 15 in my case) until the distribution is ready and not “In Progress” anymore. Then you can access the S3 bucket files using the new CloudFront URL aka domain name.

How You Could Manage CloudFront Access Yourself

Per-File Setup

Go to your S3 bucket’s permissions. (Select S3 from the Services menu; then select your bucket from the list; then select the “Permissions” tab.)

Select the “Access Control List” permission setting page. (Yeah, another layer of tabs below tabs. I get lost in the AWS console a lot.)

Below the Access for other AWS accounts heading, select “Add Account”. Paste the CloudFront canonical user ID you just generated and tick “List Objects”. Now your CloudFront has read access to your bucket and can list all objects.

Repeat this step for every file you want CloudFront to access.

For changing directories full of stuff, like my app updates, I’d rather not rely on a manual process and use bucket policies instead. Think of them as regular expressions for paths that access rights are applied to.

CloudFront Access Rule in an S3 Bucket Policy

Look at your S3 Bucket Policy. (Select S3 from the Services menu; then select your bucket from the list; then select the “Permissions” tab; then select the “Bucket Policy” sub-tab.)

If CloudFront finished its set-up work, you will see an active policy already.

If not, or if you skipped the generation step above, here’s what the policy looks like.

You can use the bucket policy generator to some extent. The hardest part for me was to find out how to specify the CloudFront user ID. StackOverflow to the rescue, and after reading the docs, the following template is what I ended up using. Note that the Id and Sid are human-readable strings with a unique timestamp so I avoid collisions in the future:

    {
        "Version": "2012-10-17",
        "Id": "CloudFrontAccess20190224100906",
        "Statement": [
            {
                "Sid": "CloudFrontReadAccess20190224100921",
                "Effect": "Allow",
                "Principal": {
                    "CanonicalUser": "CANONICAL_USER_ID"
                },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::BUCKETNAME/*"
            }
        ]
    }

Adapt the template to your settings:

  • Replace BUCKETNAME with your bucket name;
  • and replace CANONICAL_USER_ID with the long OAI canonical user ID from earlier.

The s3:GetObject action grants read access to the objects’ contents. You can come up with your own Statement ID (Sid) and policy ID (Id) if you want.
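If you fill in the template more than once, a tiny script can do the substitution and catch JSON syntax errors early. This is a hypothetical Python helper, not an AWS tool; the bucket name and canonical ID below are made up:

```python
import json

# Made-up values; use your bucket name and the long OAI canonical user ID.
bucket_name = "my-app-downloads"
canonical_user_id = "EXAMPLECANONICALID"

policy_template = """{
    "Version": "2012-10-17",
    "Id": "CloudFrontAccess20190224100906",
    "Statement": [{
        "Sid": "CloudFrontReadAccess20190224100921",
        "Effect": "Allow",
        "Principal": {"CanonicalUser": "CANONICAL_USER_ID"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::BUCKETNAME/*"
    }]
}"""

policy = (policy_template
          .replace("BUCKETNAME", bucket_name)
          .replace("CANONICAL_USER_ID", canonical_user_id))
json.loads(policy)  # raises ValueError if the substitution broke the JSON
print(policy)
```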

If you save the policy and refresh your browser, you’ll notice that AWS replaced this line:

"CanonicalUser": "CANONICAL_USER_ID"

Instead, it’ll read:

"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity SHORT_CLOUDFRONT_ID"

You can also add additional statements to deny read access to any user except the CloudFront user if you want. Find details in the StackOverflow post. I don’t use this.

How to Refresh CloudFront

If you play around with files and access rights, you may end up with CloudFront serving files that no longer exist, or denying access even though you fixed your permissions.

That’s because CloudFront is a cache. Its cache is not refreshed on every request – that’d be pointless. Instead, you need to tell CloudFront to forget its cached contents.

Head to your CloudFront distribution list, select your distribution, then go to the Invalidations tab.

Select Create Invalidation and enter a wildcard, *, to invalidate all cached files. The new invalidation request will show up in the list as “In Progress”. After a minute or so, the invalidation finishes, and you can access the public URLs again to see if everything works now.


Cloud services still feel weird to me. I never know when things correctly update and if the services work together properly. I probably pasted the download URLs into my web browser a hundred times to test availability. But I’m happy to have learned that CloudFront caches my downloads all around the globe: this means faster download times! Also, it is supposed to reduce cost. I’m looking forward to next month’s bill to verify.

The strangest thing about all this is the eventually consistent service setup. You create a service, then another, but things don’t work right away. Changes take time; it takes a couple of minutes to reflect the new synchronized state you just created. It’s a flexible model, but a bit awkward to debug at first.

Another bonus of adding CloudFront as a layer around my S3 bucket is the use of custom SSL certificates. Apple’s download restrictions (aka “App Transport Security”), introduced a couple of years ago, made it impossible to download app updates from S3 buckets directly. The shared certificate didn’t meet the restrictions Apple imposed. That was when I changed update hosting to my own web server. Once I configure the CloudFront domain’s certificates properly, I can once again provide downloads via Amazon’s servers and save a couple of Euro cents more, yay!

A few years ago, the AWS console was worse. I couldn’t do anything besides the most basic tasks. It seems the situation has improved a lot. Or I became smarter. Or less afraid to break things, I don’t know. Either way, it was a pretty simple process to set up my own public file download CDN this time.

Fixed the Blog Archive

I was looking for an old post and found that the paginated list of blog posts didn’t quite contain all the posts I expected. Turns out I introduced an off-by-1 bug there, computing pages for 10 posts per page, but working with all_posts.each_slice(10-1), effectively dropping 1 post per page.
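The arithmetic of the bug is easy to reproduce; a quick Python sketch with a hypothetical post count:

```python
# Paginating 20 hypothetical posts, 10 per page.
posts = [f"post-{i}" for i in range(1, 21)]
page_size = 10

def paginate(items, per_page):
    return [items[i:i + per_page] for i in range(0, len(items), per_page)]

correct = paginate(posts, page_size)    # pages of 10
buggy = paginate(posts, page_size - 1)  # the each_slice(10 - 1) mistake
print([len(p) for p in correct])  # [10, 10]
print([len(p) for p in buggy])    # [9, 9, 2]
```

Every page comes up one post short, so the archive pages no longer line up with the posts you expect on them.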

Of course my website isn’t unit-tested, so no wonder :)

Sync Emacs Org Agenda Files via Dropbox Without Conflicts

I use beorg on iPad/iPhone to view my Emacs Org files. Until today, I used it only for viewing files, not for editing tasks, because I ran into sync conflicts.

This is how I solved the problem with a few simple1 settings.

Put org files into Dropbox

First, put your org files into Dropbox. I keep all .org files in one directory and use that directory as org-agenda-files:

(setq org-agenda-files (quote ("~/Pending/org/gtd")))

This will automatically expand the directory contents using org-agenda-file-regexp, which by default is set to:

"\\`[^.].*\\.org\\'"

So it matches all non-hidden org files, nice.
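For illustration, here is the same idea translated into Python regex syntax (an approximation of the Emacs default, since the two regex dialects differ):

```python
import re

# Approximation of org-agenda-file-regexp:
# non-hidden file names ending in ".org".
org_file = re.compile(r"\A[^.].*\.org\Z")

print(bool(org_file.match("gtd.org")))      # True
print(bool(org_file.match(".secret.org")))  # False: hidden file
print(bool(org_file.match("notes.txt")))    # False: not an org file
```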

Prevent sync conflicts

When you sync files, you externally change the files that correspond to buffers that are probably open at all times if you use Emacs org agenda. Once you edit files on your computer, either saving will not work because of conflicts, or you will be asked if you want to read buffer contents from disk again.

One part of avoiding conflicts is to save relevant org files often, the other part is auto-reloading from disk when possible.

Auto-save org buffers to disk

You can enable auto-saving of org files while Emacs is running with a hook:

(add-hook 'auto-save-hook 'org-save-all-org-buffers)

That will put auto-saving all open org files on a timer. Performing changes to buffers from the org agenda overview, for example, doesn’t mark the buffer as needing to auto-save, as far as I understand. So this setting helps to auto-save all org buffers regularly.

Auto-reload buffers when files on disk change

Then you can enable auto-reloading from disk when you change files in the Dropbox.

I am not comfortable having global-auto-revert-mode turned on, I’d rather limit this to my task lists. But if this is cool with you, just put (global-auto-revert-mode t) into your init.el file.

If you’re like me and want to set only the task-related org buffers to auto-reload, you need to enable auto-revert-mode minor mode for these buffers. That will reload the buffer when the file changes like the global variant, but limited to a particular buffer.

Do not set this manually. There’s a better way: auto-executing commands when files are loaded.

Emacs will execute whatever you put into a .dir-locals.el as soon as you open (aka “find”) a file from that directory. This can be used to enable auto-revert-mode for all org files in your Dropbox.

In my case, the path would be ~/Pending/org/gtd/.dir-locals.el, and this is what I put inside:

; Enable auto-revert-mode for buffers from files from this directory:
((nil . ((eval . (auto-revert-mode 1)))))


Now when I open my org agenda, all org-agenda-files will be loaded to populate the agenda – that is, all contents of this directory. Each and every corresponding buffer will have auto-revert-mode enabled and, thanks to the auto-save hook, will save regularly to disk.

This isn’t bullet-proof. I can still produce file conflicts easily when I change files on disk before the auto-save is triggered.

But you know what? I didn’t even bother checking the auto-save interval settings. I want to save web links and tick off items from my mobile devices, that’s it. And when I’m on my mobile devices, minutes will have passed since I left my Mac, and the files will already have synced. Same in the other direction. I don’t sit in front of my Mac and tick off stuff on my iPad. That’d be stupid. So under regular circumstances, I don’t produce sync conflicts.

I’m happy with the result, and I hope your life will be better with this, too!

  1. Simple, of course, only in hindsight. As always, figuring out what I am really looking for when tweaking my Emacs setup is the worst part. For example, I was trying to write a ct/org-agenda-reload-all-buffers that calls revert-buffer for each agenda-managed buffer, and I was prepared to manually sync by hitting the “R” key from the agenda view like a caveman. Now I found a totally different approach. 

Add Navigation Buttons to NSTouchBar

Xcode and Safari sport Touch Bar items to navigate back and forth. Have a close look:

Xcode touch bar screenshot
Xcode’s Touch Bar
Safari touch bar screenshot
Safari’s Touch Bar

When you put two NSTouchBarItems next to each other, there usually is a gap. Between the navigation controls, there is a mere hairline divider, but not the regular fixed-width space.

They are not realized via NSSegmentedControl, though. Compare the navigation buttons with the system-wide controls far to the right: volume, brightness, play/pause. (I’m listening to 1980s Pop Radio on Apple Music at the moment, in case you’re curious.) The system controls are an NSSegmentedControl: they have rounded corners for the whole control, while the navigation buttons have rounded corners for every button. Also, the navigation buttons have the default button width.

To create separate navigation buttons that sit close together, you will need to put them in a wrapper view and reduce the spacing between the buttons. An NSStackView, it turns out, works just fine.

I also suspect each app is using its own set of images instead of the system provided ones since the chevrons are of different sizes in Xcode and Safari.

Here’s some sample code that works:

import Cocoa

@available(OSX 10.12.2, *)
fileprivate extension NSTouchBar.CustomizationIdentifier {
    static let mainTouchBar = NSTouchBar.CustomizationIdentifier("the-touchbar")
}

@available(OSX 10.12.2, *)
fileprivate extension NSTouchBarItem.Identifier {
    static let navGroup = NSTouchBarItem.Identifier("the-touchbar.nav-group")
}

class WindowController: NSWindowController, NSTouchBarDelegate {

    override func makeTouchBar() -> NSTouchBar? {
        let touchBar = NSTouchBar()
        touchBar.delegate = self
        touchBar.customizationIdentifier = .mainTouchBar
        touchBar.defaultItemIdentifiers = [.navGroup]
        return touchBar
    }

    func touchBar(_ touchBar: NSTouchBar, makeItemForIdentifier identifier: NSTouchBarItem.Identifier) -> NSTouchBarItem? {
        switch identifier {
        case .navGroup:
            let item = NSCustomTouchBarItem(identifier: identifier)
            let leftButton = NSButton(
                image: NSImage(named: NSImage.touchBarGoBackTemplateName)!,
                target: nil,
                action: #selector(goBack(_:)))
            let rightButton = NSButton(
                image: NSImage(named: NSImage.touchBarGoForwardTemplateName)!,
                target: nil,
                action: #selector(goForward(_:)))
            let stackView = NSStackView(views: [leftButton, rightButton])
            stackView.spacing = 1
            item.view = stackView
            return item

        default:
            return nil
        }
    }

    @IBAction func goBack(_ sender: Any) { }
    @IBAction func goForward(_ sender: Any) { }
}

Here I use the built-in Touch Bar image names NSImage.touchBarGoBackTemplateName and NSImage.touchBarGoForwardTemplateName.

You can also use your own vector-based @2x images. There’s no real benefit over using the built-in ones. Just in case, I whipped up two images in my favorite image editor, Acorn, and exported them as PDF:

Touch bar screenshot with my buttons
This is what my button images look like now. (Click to download.)

Download PDFs

In case you wondered: You can take screenshots of the Touch Bar by hitting ⇧⌘6.

Replacing More Reference Objects with Value Types

I was removing quite a few protocols and classes lately. Turns out, I like what’s left.

I relied on classes, for example, because they can be subclassed as mocks in tests. Protocols provide similar flexibility. But over the last 2 years, the behavior I was testing shrank to simple value transformations.

Take a ChangeFileName service, for example. It is used by another service during mass renaming of files. It is also used when the user renames a single file. The updatedFilename(original:change:) method is the only interesting part of it. You call this method from other objects and verify this behavior in your tests. Then, one day, you look at the method body of updatedFilename again and find it can be shortened to:

func updatedFilename(original: Filename, change: FilenameChange) -> Filename {
    return original.changed(change)
}

That’s similar to what I was looking at. Filename and FilenameChange are value types (structs) and their transformations are well-tested. The updatedFilename method only makes this call mock-able. But what for? You can change the tests just as well!

This is a typical object-based test with mocks and assertions on transformations:

  1. Is the other object called? (didUpdateFilename)
  2. Is the other object’s result used? (testUpdatedFilename)

/// The test double/mock object
class ChangeFileNameDouble: ChangeFileName {
    var testUpdatedFilename: Filename = Filename("irrelevant")
    var didUpdateFilename: (original: Filename, change: FilenameChange)?
    override func updatedFilename(original: Filename, change: FilenameChange) -> Filename {
        didUpdateFilename = (original, change)
        return testUpdatedFilename
    }
}

func testMassRenaming_UpdatesFilename() {
    // Given
    let changeFileNameDouble = ChangeFileNameDouble()
    let service = MassFileRenaming(changeFileName: changeFileNameDouble)
    /// The expected result
    let newFilename = Filename("after the rename")
    changeFileNameDouble.testUpdatedFilename = newFilename
    /// Input parameters
    let filename = Filename("a filename")
    let change = Change.fileExtension(to: "txt")

    // When
    let result = service.massRename(filenames: [filename], change: change)

    // Then
    if let values = changeFileNameDouble.didUpdateFilename {
        XCTAssertEqual(values.original, filename)
        XCTAssertEqual(values.change, change)
    } else {
        XCTFail("updatedFilename(original:change:) was not called")
    }
    XCTAssertEqual(result, [newFilename])
}

It’s a bit harder to test for an array with 2+ elements, because then you have to collect an array of didUpdateFilename and prepare an array for testUpdatedFilename that’s to be used by the MassFileRenaming service.

All you test is the contract between MassFileRenaming and ChangeFileName. Which is a lot, don’t get me wrong! But then you also have to test ChangeFileName itself to make sure it does the right thing and doesn’t just return bogus values once you stop using a mock.

This test case can be changed to the mere value transformation itself without the “class behavior wrapper”. And since you don’t rely on a mock, it’s easier to test collections of values:

func testMassRenaming_UpdatesFilename() {
    // Given
    let service = MassFileRenaming()
    /// Input parameters
    let filenames = [
        Filename("first filename"),
        Filename("another filename"),
        Filename("third filename")
    ]
    let change = Change.fileExtension(to: "txt")

    // When
    let result = service.massRename(filenames: filenames, change: change)

    // Then
    let expectedFilenames = filenames.map { $0.changed(change) }
    XCTAssertEqual(result, expectedFilenames)
}

“But isn’t this an integration test now?” Well, as always, it depends on your definition of a “unit”! You don’t write adapters and wrappers for all UIKit/AppKit method calls, either, and you treat Foundation.URL, Int, or String as if there’s nothing to worry about.

The Filename type, in this example, is reliable, too. If it’s in another module, like MyAppCoreObjects, and thus even more isolated from your other app code, wouldn’t you treat it the same as URL?

I still have to rethink some things with regard to value types. But one nice thing is that you can use self-created value types like any built-in types. No need to wrap in a class/reference object.

Maybe I need more time to adjust to this kind of thinking. I do know that the talk by Ken Scambler I mention time and again was very helpful to make the transition with confidence, like when I began changing RxSwift view models to free functions.

Do Not Apply Code Heuristics When You Need a Broader Perspective

You can only improve things inside the frame you pick. If your frame is too narrow for the problem you try to solve, you cannot properly take everything into perspective. That’s a trivial statement as it’s only re-stating the same thing, but it’s worth stressing.

Apply this to code.

If you focus on code heuristics to improve your code base, you cannot improve the structure of your program. Even though the structure is manifested as code, it’s not code you should be thinking about. It’s concepts. Code is just the textual representation you stare at all day. Structure is what the imaginary entities of your invention bring forth.

What are “code heuristics”?

In general terms, it’s thinking inside the terms of the solution space or implementation: Functions, classes, libraries. An opposite of the solution space is the problem space or problem domain. Your code to maintain air traffic contains methods that are called; the problem is about airplanes and collision courses. Your implementation is a translation of the problem domain, of what real people talk about, into machine-readable commands; that’s the translation from problem to solution space.

For example, the “Don’t Repeat Yourself” principle (DRY) is a code heuristic. It states that you should avoid pasting the same code all over the place. Instead, the DRY principle advises you to extract similar code into a shared code path, e.g. a function or method that can be reused.

Please note that “extract” is also a common refactoring term and “reuse” generally has a positive connotation. That doesn’t mean it’s a good idea to follow the principle blindly. Because when all you know are code-level tricks and heuristics to improve plain text files, you sub-optimize for code and endanger the problem–solution mapping as a whole. To quote the dictionary definition of “sub-optimize”:

especially : optimization of a part of a system or an organization rather than of the system or organization as a whole

You endanger the solution you are trying to improve when your criteria of “better” and “worse” are only relative to the level of code – like “do these parts of code look the same?”

One problem with following DRY blindly is you end up with extracting similar-looking code instead of encapsulating code that means or intends the same. This is sometimes called “coincidental cohesion”. You wind up reducing code duplication for the sake of reducing code duplication.

If you confuse mere repetition of the same characters in your source text files with intending the same outcome, you can end up with a single function that you cannot change anymore without affecting all the wrong parts of the program. This becomes a problem when you need to adjust the behavior in one place but not another. You cannot do that if the code is shared. (If you’re lucky, you duplicate the code again; if you’re unlucky, you change the shared code and break another part of the app.)

Take this code:

let x = 19 * 2
let y = 19 * 4

A DRYer example:

func nineteenify(_ num: Int) -> Int {
    return num * 19
}

let x = nineteenify(2)
let y = nineteenify(4)

But what if one of the 19s is about the German VAT, and Germany’s tax legislation changes the rate to 12, while the other 19 is just your lucky number used for computing an object hash, or whatever? (Both similar-looking “magic numbers” represent concepts, albeit different ones in this case!)
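One way out is to name the concept each magic number represents instead of deduplicating the digits. A sketch in Python, with made-up constant names:

```python
# Two constants that happen to share a value but mean different things.
VAT_RATE_PERCENT = 19  # tracks German tax legislation; may change
LUCKY_HASH_SEED = 19   # arbitrary number for hashing; unrelated to taxes

x = 2 * VAT_RATE_PERCENT
y = 4 * LUCKY_HASH_SEED
print(x, y)  # 38 76
```

Now a VAT change touches one constant without dragging the hash seed along, which a shared nineteenify() could never offer.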

Another code heuristic is called symmetry. The lines that make up your methods should look similar to increase readability.

Take the following code for example. You are not satisfied with it anymore because each line looks different from all the others:

func eatBanana(banana: Banana) {
    let peeledBanana = banana.peeled
    guard isEdible(peeledBanana) else { return }
    assimilateNutrients(peeledBanana)
}

To improve symmetry of the code and thus increase readability, you figure that this sequence is essentially about a transformation in 3 steps. Also, you like custom operators for their brevity and how code reads with the bind or apply operator, |>. Think of it as applying filters here. You increase terseness to reduce noise and end up with this:

func eatBanana(banana: Banana) {
    self.prepare(banana: banana)
        |> self.checkEdibility
        |> self.assimilateNutrients
}

// Extracted:

private func prepare(banana: Banana) -> PeeledBanana? {
    return banana.peeled
}

private func checkEdibility(peeledBanana: PeeledBanana) -> PeeledBanana? {
    guard isEdible(peeledBanana) else { return nil }
    return peeledBanana
}

Looks better, maybe? If you understand how the operator works, this could be useful to state the intent of eatBanana more clearly as a step of value transformations.

Questions not asked in the meanwhile:

  • What does the rest of the class look like? Are there 100 other methods in the affected type? Does this one method become easier to read while the type as a whole becomes harder? (Another code-level heuristic.)
  • Whose responsibility is the preparation of bananas? Do these methods fit together into a single object? Is the receiver of these methods the best or just whatever is accidentally owning eatBanana? Should this be separated into multiple objects with more focused responsibilities instead? If you did that, would the original problem of symmetry vanish already? (This is not a code-level decision anymore.)

I hope this illustrates what a focus on the solution space brings forth. Taking your code as a piece of text you can run grep or sed transformations on may work, but if all you do is extract similar strings of characters from text files manually, do you ask the right questions?

In a similar fashion, Design Patterns are code templates. They help write code that solves a particularly intricate problem. Design Patterns are complex solutions, but they are confined to the solution space nevertheless. This includes the popular MVVM as well as the Factory or Visitor design pattern.

“To implement MVVM, you need a Model type, a View type, and a View Model object/protocol/… that is based on the Model but exposes an interface tailored to the View.” – This is a recipe for writing code. It does not and cannot answer the tough question: what is the best abstraction for your model? What components do you need to translate the problems of air traffic control into an iOS app, for example?
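Followed literally, the recipe produces something like this minimal sketch. All names here are illustrative, picked to match the air traffic control example; they come from me, not from any real project:

```swift
// Model: one possible abstraction of the problem domain --
// whether it is a *good* one is exactly the question the recipe cannot answer.
struct Aircraft {
    let callsign: String
    let altitudeInFeet: Int
}

// View Model: based on the Model, exposing an interface tailored to the View.
struct AircraftCellViewModel {
    let title: String

    init(aircraft: Aircraft) {
        self.title = "\(aircraft.callsign) at \(aircraft.altitudeInFeet) ft"
    }
}
```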

Is an Aircraft class a good abstraction?

This question cannot be answered by reading more code. The answer cannot be sped up with recipes.

This question can only be answered by thinking about the problem.

I think part of software architecture is modeling the problem domain, and isolating your translation of the original problem from e.g. UIKit code properly. That’s when architectural heuristics come into play, like “use layers to separate different parts of your app and avoid a mess.” The VIPER pattern is a higher-level code pattern that affects your architecture.

And this is what bugs me about most of the conversation in our community at the moment: the conflation of Design Patterns (code heuristics) with “Application Architecture” (which goes way beyond that). When we keep these concepts separate, we can continue to focus on one at a time and get better. It’s like the DRY problem from above all over again: if your frame is too narrowly focused on visible repetition on your screen, you cannot have the bigger picture in mind.

If we continue to fail to distinguish these concepts, we end up losing the ability to ask some kinds of questions.

And I argue some of these are the really important ones that help your software as a solution to a real-world problem continue to thrive.

Refactoring The Archive’s ReSwift Underpinnings

While I was refactoring my app’s state recently, I noticed that there are virtually no dev logs about The Archive and how I manage state and events in this app. It’s my largest app so far, yet the amount of reflection and behind-the-scenes info is pretty sparse.

This post outlines my recent changes to the app’s state. It’s a retelling of the past 5 weeks or so.

ReSwift: Central State and Control Flow

For The Archive, I am using ReSwift to control the flow of information. Every user interaction and every external file change on disk is handled by a service that emits a ReSwift-compatible Action. This action is processed in a central place, the Store. The previous app state plus the incoming action result in a new app state, which is then broadcast to all subscribers so they can take action. That’s the basic flow.

I have been working with ReSwift for years now. I use it in TableFlip, too, for example. I really like how you end up with a state representation of the whole app (or document): every new feature you add results in a state change, triggered by an action. Action names like CreateNote, ChangeFileEncoding and SortResults are very easy to understand when you encounter them in code.
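The basic flow can be sketched without any ReSwift dependency. This toy version mimics the store/reducer machinery to show the shape of the idea; it is not ReSwift’s actual API, and the action and state names are only examples:

```swift
protocol Action {}

struct AppState {
    var noteTitles: [String] = []
}

struct CreateNote: Action {
    let title: String
}

// Previous state + incoming action = new state.
func reduce(state: AppState, action: Action) -> AppState {
    var state = state
    if let createNote = action as? CreateNote {
        state.noteTitles.append(createNote.title)
    }
    return state
}

final class Store {
    private(set) var state = AppState()
    private var subscribers: [(AppState) -> Void] = []

    func subscribe(_ subscriber: @escaping (AppState) -> Void) {
        subscribers.append(subscriber)
    }

    // Every action is processed in this one central place,
    // then the new state is broadcast to all subscribers.
    func dispatch(_ action: Action) {
        state = reduce(state: state, action: action)
        subscribers.forEach { $0(state) }
    }
}
```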

Partitioning the App State of The Archive

It took a couple of days, but I am about to finish a major refactoring of The Archive that would’ve been much more painful if I had used a less orderly approach to state management. Over the last 18 months of development, I changed my mind about what actions do and how to handle some side effects, and also had to solve a bug here and there. Some initial design decisions have proven to be outdated. That’s the way software development works, of course: you iterate and change things, and then refactor to accommodate new insights.

One such insight was related to my partitioning of the app’s state. Some folks work with a completely flat hierarchy of state; in ReSwift, that means 1 single struct with a ton of properties. I was partitioning the state into various groups and created a hierarchy like this (lots of details omitted):

  • AppState, the root state type
    • SettingsState for global app settings,
    • LicenseState to manage the trial and license key info,
    • PendingFileChanges, a queue I wrote about before,
    • AppRoute, an enum representing if the editor is visible or the initial launch screen;
      • .launch is a case without sub-states, because there’s no state to manage in the launch screen;
      • .main(MainRoute) is a container just like AppState, but for the active editor;
        • WorkspaceContents contains the list of search results for further processing (the model),
        • ControlState groups UI contents like the current editor contents or search result selection (more like a view model, or “UI input ports”),
        • CommunicationState contains pending requests by the user/UI, like “select note X” (the opposite of ControlState, in a way; think “UI output port” or commands).1
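As code, that hierarchy boils down to nested value types plus an enum for the route. This is a compressed sketch with empty placeholder types and guessed property names, not the app’s actual source:

```swift
struct AppState {
    var settings: SettingsState
    var license: LicenseState
    var pendingFileChanges: PendingFileChanges
    var route: AppRoute
}

enum AppRoute {
    case launch            // no sub-state to manage in the launch screen
    case main(MainRoute)   // container for the active editor
}

struct MainRoute {
    var contents: WorkspaceContents
    var controlState: ControlState
    var communicationState: CommunicationState
}

// Placeholder types so the sketch compiles; details omitted:
struct SettingsState {}
struct LicenseState {}
struct PendingFileChanges {}
struct WorkspaceContents {}
struct ControlState {}
struct CommunicationState {}
```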

Don’t Put Service Objects Into Your App State

I already rearranged some components of MainRoute. Struggling to fix a memory leak, I changed the layout of MainRoute. Previously, MainRoute contained MainRouteState (which grouped all the sub-states mentioned above) plus a ServiceObjects reference. That was a container for classes like DirectoryMonitor, for UI presenters, ultimately referencing NSWindowController and NSViewController objects, and user event handlers. I designed it this way a year ago so I could switch the AppRoute and thus remove all service objects from memory in one swoop.

The rationale for the initial design: initializing the service components with their lifetime closely tied to the active route sounds like the route should ship with the services.

The rationale for the change: service component references don’t belong into the state. This also causes memory problems somewhere down the road in managing subscriptions.

The motivation for my initial design made sense, but the resulting design is impractical. Cannot recommend.

Put Service Objects in the Layer that Provides the Services

So I extracted the ServiceComponents part from the state completely. Instead of the state module, it is now only known to the outermost layer of the assembled app itself. I put it into a global variable at first, and it hasn’t caused problems yet, so there’s no reason to change this now.

Extracting the reference to the service objects wasn’t too hard. I had to flip a couple of things onto their heads, though, because previously the state would ship with its services, which made some things easy, and now I have to obtain them separately from the global variable. Overall, it was an easy change.

Introducing “Workspace” as a Meaningful Partition of State

In the process, I simplified MainRoute and got rid of the MainRouteState container. New ideas popped up, and one of them was to rearrange and rename a couple of MainRoute states. The WorkspaceContents I mentioned above contains what is accessible in the current editor. I like the term “Workspace”. It’s a good name because it conveys that this is not all the data available, but the stuff currently in use by the user.

That’s when I decided I wanted to give the term “Workspace” more importance. It was better than “main route”, and especially better than “main state”, which is basically one big code-smell.

Then it occurred to me that when I group most MainRoute contents in a Workspace sub-state, I could magically introduce multiple workspaces. In terms of the app, that would be multiple windows or tabs. I only have to identify Workspaces, e.g. using a WorkspaceID, and make sure all user events are routed so they affect the correct workspace only.

The changes affect the main app route only, so this is what it looks like now:

  • .main(MainRoute) is still a container for the active editor window;
    • MainRoute.workspaces: [Workspace] contains a list of all active workspaces (only 1 at the moment),
      • Workspace groups everything that’s contained in an editing session,
        • workspaceID identifies the workspace,
        • WorkspaceContents still contains the list of search results for further processing (the model),
        • ControlState is mostly unchanged (UI input port),
        • CommunicationState also stays the same (UI output port).
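Sketched as code, the new layout might look like this. The property names are my guesses based on the list above, and the placeholder types stand in for the real sub-states:

```swift
struct MainRoute {
    var workspaces: [Workspace]  // only 1 at the moment
}

struct Workspace {
    let workspaceID: WorkspaceID   // identifies the workspace
    var contents: WorkspaceContents
    var controlState: ControlState
    var communicationState: CommunicationState
}

// Placeholders so the sketch compiles:
struct WorkspaceID: Hashable {
    let value: String
}
struct WorkspaceContents {}
struct ControlState {}
struct CommunicationState {}
```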

Making Actions Workspace-Relative

This change affects how I write ReSwift.Actions.

Think about the in-process action SelectingNote, which ultimately displays the selected note and changes the search result list selection. It is not an app-global action anymore, but relative to a workspace.

To help me during the change, I removed the ReSwift.Action compliance from SelectingNote. This means I cannot send this as an action to the ReSwift.Store for processing directly anymore. The compiler will complain, and I will have to fix all outdated usages. I am using the type system to “lean on the compiler” and avoid making mistakes. But if SelectingNote is not an action anymore, I need to wrap it in an action so I can send the event.

Introducing WorkspaceTargeted<T>:

/// Type marker for the actions of `WorkspaceTargeted`, to be used instead of `ReSwift.Action` so the
/// compiler complains when you try to dispatch them directly.
public protocol WorkspaceTargeting {}

public struct WorkspaceTargeted<T: WorkspaceTargeting>: ReSwift.Action {

    public let wrapped: T
    public let workspaceID: WorkspaceID

    public init(_ action: T, workspaceID: WorkspaceID) {
        self.wrapped = action
        self.workspaceID = workspaceID
    }
}

extension WorkspaceTargeted: CustomStringConvertible {
    public var description: String {
        return "\(wrapped) @ \(workspaceID)"
    }
}

extension WorkspaceTargeted: Equatable where T: Equatable {
    public static func ==(lhs: WorkspaceTargeted<T>, rhs: WorkspaceTargeted<T>) -> Bool {
        return lhs.workspaceID == rhs.workspaceID
            && lhs.wrapped == rhs.wrapped
    }
}
I now send the event by dispatching WorkspaceTargeted(SelectingNotes(...), workspaceID: targetID).

This made me rearrange actions, separating workspace-targeted actions from app global actions. This helps find things in the project. It’s a more useful grouping of code files than I had before.
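To make the routing concrete, here is a sketch of how a reducer might unwrap a targeted action and apply it to the matching workspace only. All types here are simplified, self-contained stand-ins (including the SelectingNote payload), not the app’s real code:

```swift
// Minimal stand-ins so the sketch compiles on its own:
protocol Action {}
protocol WorkspaceTargeting {}

struct WorkspaceID: Equatable {
    let value: String
}

struct WorkspaceTargeted<T: WorkspaceTargeting>: Action {
    let wrapped: T
    let workspaceID: WorkspaceID
}

struct SelectingNote: WorkspaceTargeting {
    let noteTitle: String
}

struct Workspace {
    let workspaceID: WorkspaceID
    var selectedNoteTitle: String?
}

// Route the wrapped action so it affects the correct workspace only:
func reduce(workspaces: [Workspace], action: Action) -> [Workspace] {
    guard let targeted = action as? WorkspaceTargeted<SelectingNote> else { return workspaces }
    return workspaces.map { workspace in
        guard workspace.workspaceID == targeted.workspaceID else { return workspace }
        var workspace = workspace
        workspace.selectedNoteTitle = targeted.wrapped.noteTitle
        return workspace
    }
}
```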

I had to change about 40 actions to make use of this new distinction. Took a while.

In the process, I also found actions that were doing multiple things, or the wrong thing completely.

For example, when you create a note in the app, that means you create an in-memory Note and write its contents as a file onto the disk. These two processes are marked as completed by dispatching CompletingNoteCreation, which affects the workspace, and CompletingFileCreation, which affects the file system.

Previously, CompletingFileCreation contained a copy of the Note that was created. This was a code-motivated addition: I needed to know when a particular Note was created so I could display it automatically. Because file creation finishes after the workspace change, I decided to attach the info to the latest action in the sequence.

Last week, I was moving CreatingNote and its CompletingNoteCreation counterpart into the Workspace-targeted actions. The resulting Note should be displayed in the workspace where the action originated. But the object was attached to the file creation action, not the note creation action. That was wrong, because file creation happens in an app-global queue. (PendingFileChanges, remember?) This didn’t cause problems before, but now it wouldn’t work anymore.

I removed the Note reference from CompletingFileCreation and attached it to CompletingNoteCreation. Then I figured, why is the sequence structured this way anyway? If creating a note starts the sequence of events, shouldn’t the completion event come last? If you think about nested events like layers of an onion, my previous sequence broke the layering. I changed the order of events a bit and now have a much neater start–finish pairing.

I probably wouldn’t have noticed this if I hadn’t changed the actions to become workspace-relative. And I wouldn’t have made them workspace-relative if the notion of a “Workspace” as a model term hadn’t appeared.

Again, this is how software development oftentimes works: you discover new concepts (here, a “workspace”) and suddenly the meaning of a lot of things going on in your app changes.

The result is a code base that better conveys the intent of your actions, and improves the cohesion of states. It’s easier for me to decide if something should be part of a Workspace than it was to figure out if it affects the bland MainRouteState.

  1. I adopted the naming convention of ControlState and CommunicationState from James Nelson, “The 5 Types Of React Application State”. I had more sub-states inside MainRoute before, but was able to rearrange most of them into control/communication, or UI input/output. Lesson learned: Keep it simple, and don’t use what you don’t need, until you need it. Top-down hierarchies always bit me, this included, and I always end up having to simplify things later. 

Replace RxSwift View Model Types with Free Functions

Instead of writing view model types that expose the input and/or output ports as properties, consider writing free functions that are the transformation.

This reminds me of Ken Scambler’s proposal to get rid of test doubles through side-effect free function composition. One of his examples, simplified, goes like this: instead of testing what pay(amount:wallet:) does in your tests by mocking the wallet object, you rewrite Wallet as a value object and change pay to return the effect.


class Wallet {
    private var cash: Int

    func takeOut(amount: Int) {
        cash -= amount
    }
}

extension Customer {
    func pay(amount: Int, wallet: Wallet) {
        // Need to mock the `wallet` reference type in your tests
        wallet.takeOut(amount: amount)
    }
}

Rewritten as a value type, this becomes:

struct Wallet {
    var cash: Int
}

func pay(amount: Int, wallet: Wallet) -> Wallet {
    var result = wallet
    result.cash -= amount
    return result
}
This way, the original Wallet isn’t affected and you can test the payment action in isolation. Ken’s examples are more sophisticated than that, but it should get the idea across.
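The payoff shows up in tests: with a value-type Wallet, there is nothing to mock, you just compare values. A self-contained sketch of that idea:

```swift
struct Wallet {
    var cash: Int
}

// Returns the effect instead of mutating a shared reference:
func pay(amount: Int, wallet: Wallet) -> Wallet {
    var result = wallet
    result.cash -= amount
    return result
}

// No test doubles needed; just compare values:
let wallet = Wallet(cash: 100)
let afterPayment = pay(amount: 30, wallet: wallet)
assert(afterPayment.cash == 70)
assert(wallet.cash == 100) // the original value is untouched
```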

Now DJ Mitchell proposes a similar approach to RxSwift view models: make them mere transformations of one set of observables into another, e.g. of this form: (Observable<T1>, Observable<T2>, ...) -> (Observable<U1>, Observable<U2> ...)

Usually, in your view code, you end up with PublishSubject/PublishRelay to generate output events that you expose as Observable. This dance is a well-established pattern to bridge between the non-reactive and the reactive world. It’s useful to accept an OOP message input (e.g. changeUserName(_:) messages) and expose a reactive observable sequence output (e.g. userName: Observable<String>).

But when you write reactive code, you can end up with input events as observable event sequences anyway. There’s no need to use non-reactive messages in some cases.

This is where direct transformations kick in.

Look at the example by DJ:

func loginViewModel(
    usernameChanged: Observable<String>,
    passwordChanged: Observable<String>,
    loginTapped: Observable<Void>
) -> (
    loginButtonEnabled: Observable<Bool>,
    showSuccessMessage: Observable<String>
) {
    let loginButtonEnabled = Observable.combineLatest(
        usernameChanged,
        passwordChanged
    ) { username, password in
        !username.isEmpty && !password.isEmpty
    }

    let showSuccessMessage = loginTapped
        .map { "Login Successful!" }

    return (
        loginButtonEnabled: loginButtonEnabled,
        showSuccessMessage: showSuccessMessage
    )
}

You have 3 input streams and return a tuple of 2 output streams. That’s pretty easy to test thoroughly thanks to RxTest and its timed event recording. And it reduces the actual amount of ugly RxSwift binding code.

You can get from a lot of input/output bindings like this:

disposeBag.insert(
    // Inputs
    loginButton.rx.tap
        .subscribe(onNext: viewModel.inputs.loginTapped),

    usernameTextField.rx.text.orEmpty
        .subscribe(onNext: viewModel.inputs.usernameChanged),

    passwordTextField.rx.text.orEmpty
        .subscribe(onNext: viewModel.inputs.passwordChanged),

    // Outputs
    viewModel.outputs.loginButtonEnabled
        .bind(to: loginButton.rx.isEnabled),

    viewModel.outputs.showSuccessMessage
        .subscribe(onNext: { message in
            // Show some alert here
        })
)
To a transformation call and a lot fewer bindings here:

let (
    loginButtonEnabled,
    showSuccessMessage
) = loginViewModel(
    usernameChanged: usernameTextField.rx.text.orEmpty.asObservable(),
    passwordChanged: passwordTextField.rx.text.orEmpty.asObservable(),
    loginTapped: loginButton.rx.tap.asObservable()
)

disposeBag.insert(
    loginButtonEnabled
        .bind(to: loginButton.rx.isEnabled),

    showSuccessMessage
        .subscribe(onNext: { message in
            // Show some alert here
        })
)
I’m constantly refactoring my RxSwift-based code as I learn new things. I found view model types to be pretty useless in the long run. Writing tests for them is cumbersome because they are the transformations, yet they expose outputs as properties and accept inputs in the initializer. I end up with lots of object setup boilerplate. They don’t need to be actual objects since they expose no actual behavior. They are already mere value transformations, only wrapped in a type.

I guess this is a natural step when you start with OOP in mind and think about sensible encapsulations in types. But sometimes, like in DJ’s example, free functions work just as well to express your value transformation intent.

Update 2019-03-08: I continued this line of thought and refactoring in Replacing More Reference Objects with Value Types.

Programmatically Add Tabs to NSWindows without NSDocument

The Cocoa/AppKit documentation is very sparse when it comes to tabbing. You can make use of the native window tabbing introduced in macOS Sierra with a few simple method calls, though.

Conceptually, this is what you will need to do:

  • Set NSWindow.tabbingMode to .preferred so the tab bar will always be visible.
  • It suffices to call NSWindow.addTabbedWindow(_:ordered:) to add a window to the native tab bar and get everything tabs do for free.
  • Once you put NSResponder.newWindowForTab(_:) into the responder chain of the main window, the “+” button in the tab bar will be visible.

However, there are some caveats when implementing these methods naively. The plus button may stop working (no new tabs are added when you click it) and all default shortcuts are broken, their main menu items greyed out.

How to Implement newWindowForTab

First, where to add @IBAction override func newWindowForTab(_ sender: Any?)? That’ll be the event handler to create new tabs.

  • If you use Storyboards, then put this into a NSWindowController subclass you own. That’s the simplest way to get at an NSWindow to call addTabbedWindow on.
  • If you use Xibs, the AppDelegate will have a reference to the main window. You can put the method here.
  • If you use a programmatic setup, put it wherever you know the main NSWindow instance.

We’ll stick to NSWindowController for the rest of this post, no matter how you create it:

class WindowController: NSWindowController {
    @IBAction override func newWindowForTab(_ sender: Any?) {
        // Implementing this will display the button already
    }
}

How to Call addTabbedWindow

Once you have newWindowForTab(_:) in place, add functionality to it: create a new NSWindow and add it to the tab bar.

  • If you use Storyboards, grab the instance via NSWindowController.storyboard; then instantiate a new window controller instance, for example using self.storyboard!.instantiateInitialController() as! WindowController.
  • If you use Xibs with a NSWindowController as the File’s Owner, create an identical window controller using NSWindowController.init(windowNibName:). Use its window property, discard the controller.
  • If you use Xibs with a NSWindow only and no controller, get the window from there.
  • If you use programmatic setup, well, instantiate a new object of the same window type as usual.

When you have the new window object, you can call addTabbedWindow. Using the Storyboard approach, for example, turns the implementation into this:

class WindowController: NSWindowController {
    @IBAction override func newWindowForTab(_ sender: Any?) {
        let newWindowController = self.storyboard!.instantiateInitialController() as! WindowController
        let newWindow = newWindowController.window!
        self.window!.addTabbedWindow(newWindow, ordered: .above)
    }
}

Fix the “+” Button and Main Menu

TL;DR: When you initialize a new window, set its windowController property to the existing window controller to make sure the new tab forwards responder chain messages just like the initial window.

Take into account how events are dispatched. Main Menu messages are sent down the responder chain, and so is newWindowForTab. NSApp.sendAction will fail for standard events if the source of the call doesn’t connect up all the way – that means, at least up to your NSWindowController, maybe even up to your AppDelegate.

You have to make sure any additional window you add is, in fact, part of the same responder chain as the original window, or else the menu items will stop working (and be greyed-out/disabled). Similarly, the “+” button stops working when you click it.

If you forget to do this and run the code from above, it seems you can’t create more than two tabs. That’s the observation, but it’s not an explanation. You can always create more tabs, but only from the original window/tab, not the new one; that’s because the other tab is not responding to newWindowForTab.

Remember: “The other tab” itself is just an NSWindow. Your newWindowForTab implementation resides in the controller, though. That’s up one level.

class WindowController: NSWindowController {
    @IBAction override func newWindowForTab(_ sender: Any?) {
        let newWindowController = self.storyboard!.instantiateInitialController() as! WindowController
        let newWindow = newWindowController.window!

        // Add this line:
        newWindow.windowController = self

        self.window!.addTabbedWindow(newWindow, ordered: .above)
    }
}

Now newWindow will have a nextResponder. This will fix message forwarding.

Using Multiple Window Controllers

The solution above shows how to add multiple windows of the same kind, reusing a single window controller for all of them.

You can move up newWindowForTab one level to another service object, say the AppDelegate. Then you could manage instances of NSWindowController instead of instances of NSWindow. I don’t see why you would want to do that if you can share a single controller object.

I haven’t tried anything fancier, but you should be able to use different windows and different window controllers and group them in the tab bar of a single host window. You will then need to keep the window controller instances around, too.

NSAppearance Change Notifications for Dark Mode and RxSwift Subscriptions

Last year, I had a full-time job from May until November and didn’t have much time to prepare for Mojave. Then it hit hard, and I am still tweaking my apps to look right in Mojave’s Dark Mode. I welcome that macOS offers two modes now, .aqua and .darkAqua in programmer parlance, because then I don’t have to implement a homebrew solution in all my writing applications.

One troublesome change is NSColor’s adaptability: system colors like NSColor.textBackgroundColor (white in light mode, jet black in dark mode) or NSColor.gridColor (an 80%ish grey in light mode, a transparent 30%ish gray in dark mode) are very useful, but some custom interface elements need different colors to stand out visually. You cannot create adaptive NSColor objects programmatically, though, only via Asset Catalogs. And these don’t even work on macOS 10.11. So you limit yourself to macOS 10.13 High Sierra and newer if you use Asset Catalogs at all, even though only the light variant will be exposed there. Increasing the deployment target from 10.10 to 10.13 just for the sake of easy color changes for 10.14’s dark mode is lazy. (If you have other reasons, go for it! It’s always nice to drop backwards compatibility and use the new and cool stuff where possible.)

So what options do you have?

First, check out Daniel Jalkut’s excellent series on dark mode changes. He covers a lot of ground. Chances are you don’t need to know more than what he explains in his posts.

But how do you react to changes to the appearance? Daniel uses KVO on NSApp.effectiveAppearance, and I had mixed results with observing changes to this attribute. Also, it’s macOS 10.14+ only.

Still, I want an NSNotification. Since macOS 10.9, you have NSAppearanceCustomization.effectiveAppearance, which NSView implements. So even if you cannot ask your NSApplication instance for its effective appearance, you can ask your window’s contentView. That’s what we’ll do.

If you want to see all the code, take a look at the accompanying GitHub Gist.

NSAppearance Convenience Code

First, a simple boolean indicator isDarkMode would be nice:

// Adapted from
extension NSAppearance {
    public var isDarkMode: Bool {
        if #available(macOS 10.14, *) {
            if self.bestMatch(from: [.darkAqua, .aqua]) == .darkAqua {
                return true
            } else {
                return false
            }
        } else {
            return false
        }
    }
}

Now either use KVO on your main view, or expose a reactive stream of value changes.

My latest app, The Archive, uses RxSwift and other reactive patterns wherever possible. I found this to be a very nice way to decouple parts of my program, so I wanted to stick to that approach. RxSwift adds “Reactive eXtensions” to types in the .rx namespace of objects. That means I want to end up using window.contentView.rx.isDarkMode and get an Observable<Bool> instead of the momentary value. Notice the “rx” in the middle: the idea is that your own extensions use the same attribute names but inside the reactive extension to wrap changes in an Observable sequence.

Here, look at this. You write your own reactive extension for NSView in extension Reactive where Base: NSView { ... }, like that:

import Cocoa
import RxSwift
import RxCocoa

extension Reactive where Base: NSView {

    /// Exposes NSView.effectiveAppearance value changes as a
    /// never-ending stream on the main queue.
    var effectiveAppearance: ControlEvent<NSAppearance> {
        let source = base.rx
            // KVO observation on the "effectiveAppearance" key path:
            .observe(NSAppearance.self, "effectiveAppearance", options: [.new])
            // Get rid of the optionality by supplementing a fallback value:
            .map { $0 ?? NSAppearance.current }
        return ControlEvent(events: source)
    }

    /// An observable sequence of changes to the view's appearance's dark mode.
    var isDarkMode: ControlEvent<Bool> {
        let source = base.rx.effectiveAppearance
            .map { $0.isDarkMode }
        return ControlEvent(events: source)
    }
}

Now I can subscribe to appearance changes of any of the window’s views.

Post Notifications from Main Window Changes

As I said, I want to have notifications, e.g. in places where there is no associated view or view controller. That’s where the settings are wired up and the current editor theme switches from light to dark or vice-versa in accordance with the app’s appearance.

First, I define some convenience constants for the notification sending:

extension NSView {
    // Used as the notification name
    static var darkModeChangeNotification: Notification.Name { return .init(rawValue: "NSAppearance did change") }
}

extension NSAppearance {
    // Used in Notification.userInfo
    static var isDarkModeKey: String { return "isDarkMode" }
}

To post the notification, I subscribe to a view in the view hierarchy and ask for its observable stream of dark mode changes:

window.contentView!.rx.isDarkMode
    .subscribe(onNext: { isDarkMode in
        NotificationCenter.default.post(
            name: NSView.darkModeChangeNotification,
            object: nil,
            userInfo: [NSAppearance.isDarkModeKey : isDarkMode])
    })
    .disposed(by: disposeBag) // Assuming you have one around

It’s important to note that the view has to be in the view hierarchy (and, I think, visible) to be updated at all. I tried observing a stand-alone NSView() in a service object that had no knowledge about the app’s UI, but to no avail. Doesn’t work. So I resort to the main window’s contentView which will be around while the app is running.

Subscribe to the notification as usual.

Or employ RxSwift’s NotificationCenter extensions:

let darkModeChange = NotificationCenter.default.rx
    .notification(NSView.darkModeChangeNotification)
    .map { $0.userInfo?[NSAppearance.isDarkModeKey] as? Bool }
    // Equivalent to a filterNil(): ignore incomplete notifications
    .filter { $0 != nil }.map { $0! }

darkModeChange
    .subscribe(onNext: { print("Dark mode change to", $0) })
    .disposed(by: disposeBag)

So that’s what I use and I’m content with the solution for now. I can switch colors manually where needed and reload the editor theme for the app’s current appearance.

Thanks again to Daniel Jalkut and his research!

To see all the code in one place, have a look at the GitHub Gist that goes with this post.

→ Blog Archive