Finding the Field Editor in UI Tests

On macOS, the field editor is an NSText instance that appears whenever you edit an NSTextField. This means the text fields themselves offer no editing features; they just tell the shared field editor to appear in their drawing area and display their content.

When you write XCUITests, you may want to edit cells in a table or fill out a form with many text fields. Today I learned that in UI tests, you don’t get hold of the field editor to send it the typeText message. You work with the text fields like the user does: as if they themselves accepted user input.

So there’s no reason to filter XCUIApplication().textViews in search of the field editor. Stick to textFields.element(matching: NSPredicate(format: "hasKeyboardFocus == true")) and you’re good. Note that it’s hasKeyboardFocus, not hasFocus. The XCUIElementAttributes doc seems to suggest the element objects don’t respond to much more than what’s listed there, but that’s not the case.

let focusedTextField = XCUIApplication()
    .textFields
    .element(matching: NSPredicate(format: "hasKeyboardFocus == true"))
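For example, a UI test built on this query could look like the following sketch; the test case setup and the typed string are illustrative, not taken from a real project:

```swift
import XCTest

class EditingUITests: XCTestCase {
    func testRenameViaFocusedTextField() {
        let app = XCUIApplication()
        app.launch()

        // The text field that currently has keyboard focus stands in
        // for the field editor:
        let focusedTextField = app.textFields
            .element(matching: NSPredicate(format: "hasKeyboardFocus == true"))

        // Type as if the text field itself accepted input.
        focusedTextField.typeText("Renamed note\r")
    }
}
```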

I found this especially puzzling when editing NSTableCellView contents. Then again, the cell labels are NSTextFields, but with all the drawing attributes of a label. Think different.

TL;DR: There’s no point in searching for the field editor. None of the XCUIElementSnapshot objects you can get hold of respond to NSText.isFieldEditor anyway. Just look for a text field that has focus.

Being Afraid to Change a Working App

Today I work on The Archive. The focus is on an issue brought up by the community. Search results don’t update the sort order when you modify a note unless you refresh manually.

In fact, the issue is expected behavior. The Archive, being a note-taking app where you can filter notes on disk with a live search, is designed not to update the search results for an active search term. At all. This prevents a note from disappearing from the results when you remove the search term from its contents. If you search for “foo” and get 10 results, the note you currently edit should not disappear when you cut the search term, “foo”, from it. The Archive protects the search results; a mere live-reload would shrink the list to 9 results, removing the currently edited one, and that’d be pretty confusing.

The Archive does not just display search results like a Smart Search in Finder. The visible workspace contents are supposed to stay the same until you change them deliberately. Removing a note from the app is a deliberate change. Notice my choice of words here: “workspace contents” instead of “search results”. When you work on stuff in your note archive, imagine putting a bunch of paper notes on your desk. It’d be odd if someone else took away some of the notes you wanted to continue to work with. In some sense, the workspace contents of The Archive are more like a manual selection of things you can look at than they are just search results. The means of getting to the selection of notes you focus on is via search, but that’s rather accidental.

The issue I linked to pertains to updating the sort order of the workspace contents, not updating the contents. Technically, re-sorting the notes is a change to the workspace contents that doesn’t work at the moment. From a user’s perspective, sorting the workspace by modification date should make the last modified note in the workspace bubble up to the top. This expectation makes sense.

Implementing a “fix” for the current behavior means changing the protection of the workspace contents. Now, it’s close to a binary decision: does the user see all notes, unfiltered, or does the workspace contain a careful selection that should be protected? Allowing the workspace contents to re-sort based on file system changes is not tricky, but it might affect more use cases than I want.

Which brings me to the titular problem: sometimes, I’m afraid to change parts of my app because I cannot know all the (unintended) consequences. The app’s behavior is way too complex for this.

I deal with this problem in two ways:

  1. I write short lists of intended behavior for intricate use cases to make sure I don’t mix things up in my head 2 months from now.
  2. I write UI tests that remote-control the application and assert it behaves as expected.

UI tests are regression tests. They make sure I don’t break use cases, or complicate user interactions that would be tiresome to exercise manually hundreds of times.

But UI tests only make sure I don’t break behavior that is actually tested. I can still break things that I didn’t think about. I’m inclined to write tests to orchestrate external file changes, user-originated note editing, and the effects on the UI. Replicating this manually is too cumbersome. But I don’t want to write UI tests as thoroughly as I write unit tests. Testing every branch in your app using UI tests, or integration tests in general, is virtually impossible. There are too many cases. You have to focus on a selection of interaction patterns.

Or, as I quoted J.B. Rainsberger before, “Integration tests are a scam.”

Still, they are useful to make sure that I can test parts of the app that are not yet easily unit-testable, like asynchronous file change timing issues.

If I wrote my apps in a purely functional manner, say with Haskell, I probably wouldn’t have these problems, but also wouldn’t have the app in a working condition. It’s a pragmatic trade-off.

In the end, all I can do is my best to ensure I don’t break things in a bad way. But I will eventually break things to some degree. That’s an inevitable part of the development process. Like now, I’m even deliberately breaking the protection of workspace contents in my app, changing the perceived behavior. It’s hard to tell what’s broken and what’s a change sometimes, because the judgement depends on the interaction of a user with the software, not just the app as a static entity.

TableFlip Is Now Available on the Mac App Store

TableFlip is now available on the Mac App Store!

I also released updates to the non-Mac App Store version that fix CSV editing problems and improve the user interface. Of course both versions have the same features, so you’re not missing out on anything if you only own one version.

The app store page is a feast for the eyes. There’s a demo video (I already know how I can improve it a lot with the next update), zebras, and lovely icy mountains.

Please share the news to help the app get traction. That would be super helpful!

Using Drag and Drop with NSTableView

Nate Thompson compiled a tutorial on how to implement drag & drop in NSTableView. It’s a good read.

I remember how weird it felt to implement this the first time. Drag & drop is actually realized via the pasteboard. So it’s more like cut and paste with a visual representation. From this you get the ability to put multiple content representations into the pasteboard at once, so the drop container can decide how to handle whatever it receives.
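A minimal data source sketch of this pasteboard-based approach, assuming plain string rows; the class name and the use of the generic string pasteboard type are illustrative:

```swift
import Cocoa

class DraggableDataSource: NSObject, NSTableViewDataSource {
    var items: [String] = []

    func numberOfRows(in tableView: NSTableView) -> Int {
        return items.count
    }

    // Vending a pasteboard writer per row is what makes rows draggable:
    // the dragged content is put on the pasteboard, just like copy & paste.
    func tableView(_ tableView: NSTableView,
                   pasteboardWriterForRow row: Int) -> NSPasteboardWriting? {
        let item = NSPasteboardItem()
        item.setString(items[row], forType: .string)
        return item
    }
}
```

The drop side then inspects the pasteboard in tableView(_:validateDrop:proposedRow:proposedDropOperation:) and tableView(_:acceptDrop:row:dropOperation:) and decides how to handle whatever it receives.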

Hosting Downloads on Amazon S3 with CloudFront

Since early 2019, I have hosted downloads for my app The Archive on Amazon’s servers. The S3 bucket is cheap-enough storage for the zip files, and the CloudFront cache is a content distribution network that improves download speeds across the globe.

Here’s a long tutorial, because I will most likely forget how I did all this in a while, and chances are you don’t know how to do this either.

  • Don’t be afraid. There are a lot of things you may not know, but none of them is complicated. They’re just very bare-bones.
  • It’s quite fun, actually. When I had the setup running, I felt like a cloud computing wizard. With this, you could set up websites with fast response times around the globe and scale easily!

Backstory and End Result

A couple of years ago, I dabbled with Amazon S3 storage to host downloads of my macOS apps.

One lesson I learned: do not point the app’s update feed setting to your S3 bucket. That’s too fragile. Use your own domain and redirect if needed. This way you retain 100% control of where the traffic goes. If you point to S3 directly, you give away part of your control. And when Apple enforces TLS with macOS updates, you end up with users not being able to download from the server you cannot control. Dang.

But hosting website downloads on AWS S3 still works fine. Add CloudFront on top of it, and your downloads will be served from a server closest to the request’s origin. It’s a web cache of sorts, useful for content distribution. Using CloudFront actually reduces traffic cost, so that’s what we’ll be using.

This post describes how I configured an Amazon S3 bucket with the modern non-public settings, and how I then set up CloudFront to actually publish the contents. In this case, I publish an index.html file to prevent directory listings. The rest is .zip files for downloads.

This is the result, using a .htaccess redirect of the download endpoint on my website:

$ curl -I
HTTP/1.1 303 See Other
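For reference, such a redirect can be expressed in Apache .htaccess syntax along these lines; the path and the CloudFront domain below are illustrative placeholders, not my actual setup:

```apache
# Send download requests to the CloudFront distribution.
# Path and domain are placeholders.
RedirectMatch 303 ^/downloads/(.*)$ https://EXAMPLE12345.cloudfront.net/$1
```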

I’m not an expert in cloud computing, cloud hosting, or cloud anything, really. Amazon S3 was the fanciest thing I used. I once read that you could host static websites in S3 buckets, but it didn’t sound as cost-effective, so I never looked into it any further. But with the knowledge from this post, you will be able to host a static website on S3 with great response times thanks to CloudFront if you want.


You will need an AWS account. Then click around in the console to familiarize yourself with the menu. You will need to switch between S3 and CloudFront, so try to locate the items in the huge services list, or use the search.

We will be creating a new bucket to host files from. If you don’t know anything about any of this and want to play around with S3 first, follow along the instructions from the manual:

  1. Sign up for Amazon S3
  2. Create a Bucket
  3. Follow the other steps for uploading and looking at files. Once you have a bucket, you can play around with the bucket “file” browser and upload stuff, click on the file, and view the absolute URL for reference.

Chances are you won’t be able to download files or view HTML pages from S3 buckets because public read access is not enabled. And granting public read access is discouraged nowadays, as we’ll see shortly.

Amazon Access Rights

The access right levels for S3 directories are:

  • “List objects”,
  • “Write objects”,
  • “Read object permissions”,
  • “Write object permissions”.

For files, read access is called “Read object”.

You will not want to give anyone access to file or directory permission settings. The “List objects” and “Read object” settings are the most interesting.

Naively, I granted read access (“List objects” and “Read object”) to “Everyone” when I began hosting downloads on Amazon S3. But it turns out that generating lots of traffic with direct downloads from S3 buckets is more costly than using the CloudFront distribution service. So it pays off to enable the CloudFront CDN to cache files in multiple data centers around the world, backed by a single S3 bucket as the repository.

CloudFront being a cache means changes to the repository will not be visible immediately. Keep that in mind when you experiment with settings. You will need to reset the CloudFront cache in these cases to force re-caching. We’ll get to that later down the road.

Public read access is considered a bad practice since IT companies apparently leaked private data this way. CloudFront is supposed to provide another layer around this and safeguard against putting private files into public folders.

You will want to keep all access levels enabled for your own AWS account, of course.

So here’s how to use Amazon S3 to host files (or a static website) and offer download links using the CloudFront content distribution network.

Set up File Storage

First, you need an Amazon S3 Bucket to upload files.

Create a New Bucket

Navigate to S3 in the AWS Console.

There, select “Create Bucket” and enter a readable ID for the bucket name. The bucket’s domain will be derived from this name, so you may want to keep it recognizable. During the permissions setup step, keep the public access blocking enabled. This will make AWS complain when you try to change a file to be publicly visible.

It’s supposed to be for your protection, and we won’t need direct S3 access in a minute anyway, so stick with this.

Change an Existing Bucket for New Access Management

I had an existing bucket, so here’s what I did.

You may want to keep the admin URL for your bucket open in a separate browser tab.

Go to your S3 bucket’s permissions. (Select S3 from the Services menu; then select your bucket from the list; then select the “Permissions” tab.)

Select the “Access Control List” permission setting page. (Yeah, another layer of tabs below tabs. I get lost in the AWS console a lot.) Remove “List objects” access for everybody but the owner.

Select the “Public access settings” permission setting page and edit the settings. Enable all checkboxes to block future uploads from being made public and to retroactively remove public access settings from all existing files.

Now the S3 bucket contents cannot be accessed via direct links anymore. Next, we set up the CloudFront CDN to provide the actual downloads.

Set up CloudFront for Distribution

With the bucket’s content being sufficiently protected via private read-only access, we can add the CloudFront layer to manage actual publication of our contents.

Navigate to the CloudFront service in the AWS console.

This is a two-step process: you will create a CloudFront distribution (resulting in a public URL) and a CloudFront user that will have read access to your files.

I think this is similar to local web server configurations where your site content is generally protected in your user’s home folder, but the Apache web server can read and show the data.

To save yourself some manual setup steps, we start with the user setup because then CloudFront does most of the grunt work.

Create a CloudFront “User”

To grant CloudFront read access to your bucket, you have to link the two services. This works by creating a CloudFront user, so to speak.

With CloudFront still open, in the menu to the left select Origin Access Identity (OAI) below the “Security” sub-heading. Create a so-called OAI, which will create one such user for you.

The created OAI consists of a short ID and a long canonical user name hash. You need the latter to grant access to individual files. You need the former for bucket policies later on, though. So keep both handy.

I entered “The Archive downloads” as a comment for the ID to identify it later.

Create a CloudFront Distribution

Select “Create Distribution”, then “Get Started” for a Web Distribution.

Origin Name is your S3 bucket. Click inside the field for suggestions.

Origin Path can be left empty when you want to publish the whole bucket. I myself used a public sub-directory.

Now make sure to choose Restrict Bucket Access (Yes) and then, for Origin Access Identity, pick “Use an Existing Identity”. In the drop-down, select the CloudFront user you just created. (If you edit an existing distribution, this setting will be tied to items in your “Origin” tab, not in the general distribution settings.)

For your convenience, also pick “Yes, Update Bucket Policy” for Grant Read Permissions on Bucket. That’ll create the policies for you, which is nice.

I left most other settings at their defaults. HTTPS is nice, and I don’t need to forward query parameters for downloads, so no need to change any of these.

I did change Default Root Object to index.html so I can add a simple HTML file with a link back to the main website in case someone copies the download link and tries to browse the directory.

Create the distribution.

You will need to wait a couple of minutes (about 15 in my case) until the distribution is ready and not “In Progress” anymore. Then you can access the S3 bucket files using the new CloudFront URL aka domain name.

How You Could Manage CloudFront Access Yourself

Per-File Setup

Go to your S3 bucket’s permissions. (Select S3 from the Services menu; then select your bucket from the list; then select the “Permissions” tab.)

Select the “Access Control List” permission setting page. (Yeah, another layer of tabs below tabs. I get lost in the AWS console a lot.)

Below the Access for other AWS accounts heading, select “Add Account”. Paste the CloudFront canonical user ID you just generated and tick “List Objects”. Now your CloudFront user has read access to your bucket and can list all objects.

Repeat this step for every file you want CloudFront to access.

For directories with changing contents, like my app updates, I’d rather not rely on a manual process and use bucket policies instead. Think of them like regular expressions for paths that apply access rights.

CloudFront Access Rule in a S3 Bucket Policy

Look at your S3 Bucket Policy. (Select S3 from the Services menu; then select your bucket from the list; then select the “Permissions” tab; then select the “Bucket Policy” sub-tab.)

If CloudFront finished its set-up work, you will see an active policy already.

If not, or if you skipped the generation step above, here’s what the policy looks like.

You can use the bucket policy generator to some extent. The hardest part for me was to find out how to specify the CloudFront user ID. With StackOverflow to the rescue and some reading of the docs, the following template is what I ended up using. Note that the Id and Sid are human-readable strings with a unique timestamp so I avoid collisions in the future:

    {
        "Version": "2012-10-17",
        "Id": "CloudFrontAccess20190224100906",
        "Statement": [
            {
                "Sid": "CloudFrontReadAccess20190224100921",
                "Effect": "Allow",
                "Principal": {
                    "CanonicalUser": "CANONICAL_USER_ID"
                },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::BUCKETNAME/*"
            }
        ]
    }

Adapt the template to your settings:

  • Replace BUCKETNAME with your bucket name;
  • and replace CANONICAL_USER_ID with the long OAI canonical user ID from earlier.

The s3:GetObject action is the read content/list files access right. You can come up with your own Statement ID (Sid) and policy ID (Id) if you want.

If you save the policy and refresh your browser, you’ll notice that AWS replaced this line:

"CanonicalUser": "CANONICAL_USER_ID"

Instead, it’ll read:

"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity SHORT_CLOUDFRONT_ID"

You can also add additional statements to deny read access to any user except the CloudFront user if you want. Find details in the StackOverflow post. I don’t use this.

How to Refresh CloudFront

If you play around with files and access rights, you may end up with CloudFront serving files that no longer exist, or denying access even though you fixed your permissions.

That’s because CloudFront is a cache. Its cache is not refreshed on every request – that’d be pointless. Instead, you need to tell CloudFront to forget its cached contents.

Head to your CloudFront Distribution list, select your distribution, then go to the Invalidations tab.

Select Create Invalidation and enter a wildcard, *, to invalidate all cached files. The new invalidation request will show in the list as “In Progress”. Mine finished after a minute; then you can try to access the public URLs again and see if everything works now.
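If you prefer the command line, the AWS CLI can do the same; the distribution ID below is a placeholder for your own:

```
# Invalidate all cached objects of a CloudFront distribution.
# EDFDVBD6EXAMPLE is a placeholder distribution ID.
aws cloudfront create-invalidation --distribution-id EDFDVBD6EXAMPLE --paths "/*"
```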


Cloud services still feel weird to me. I never know when things correctly update and whether the services work together properly. I probably pasted the download URLs into my web browser a hundred times to test availability. But I’m happy to have learned that CloudFront caches my downloads all around the globe: this means faster download times! Also, it is supposed to reduce cost. I’m looking forward to next month’s bill to verify that.

The strangest thing about all this is the eventually consistent service setup. You create a service, then another, but things don’t work right away. Changes take time: it takes a couple of minutes to reflect the new synchronized state you created. It’s a flexible model, but a bit awkward to debug at first.

Another bonus of adding CloudFront as a layer around my S3 bucket is the use of custom SSL certificates. Apple’s download restrictions (aka “Application Transport Security”), introduced a couple of years ago, made it impossible to download app updates from S3 buckets directly. The shared certificate didn’t meet the restrictions Apple imposed. That was when I changed update hosting to my own web server. Once I configure the CloudFront domain’s certificates properly, I can once again provide downloads via Amazon’s servers and save a couple of Euro cents more, yay!

A few years ago, the AWS console was worse. I couldn’t do anything besides the most basic tasks. It seems the situation has improved a lot. Or I became smarter. Or less afraid to break things, I don’t know. Either way, it was a pretty simple process to set up my own public file download CDN this time.

Fixed the Blog Archive

I was looking for an old post and found that the paginated list of blog posts didn’t quite contain all the posts I expected. Turns out I introduced an off-by-1 bug there, computing pages for 10 posts per page, but working with all_posts.each_slice(10-1), effectively dropping 1 post per page.
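The off-by-one can be sketched with stand-in post numbers; assuming the pagination renders a fixed number of pages, each slice has to match the per-page count:

```ruby
all_posts = (1..30).to_a  # stand-in for the real list of posts

# Buggy: slices of 9 while the pagination assumes 10 posts per page,
# so three "10-post" pages cover only 27 of the 30 posts.
buggy_pages = all_posts.each_slice(10 - 1).first(3).flatten

# Fixed: slice exactly 10 posts per page; three pages cover all 30.
fixed_pages = all_posts.each_slice(10).first(3).flatten
```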

Of course my website isn’t unit-tested, so no wonder :)

Sync Emacs Org Agenda Files via Dropbox Without Conflicts

I use beorg on iPad/iPhone to view my Emacs Org files. Until today, I used it only for viewing files, not editing tasks, because I ran into sync conflicts.

This is how I solved the problem with a few simple1 settings.

Put org files into Dropbox

First, put your org files into the Dropbox. I have all .org files in one directory and use the directory as org-agenda-files:

(setq org-agenda-files (quote ("~/Pending/org/gtd")))

This will automatically expand directory contents using org-agenda-file-regexp, which by default is set to:

"\\`[^.].*\\.org\\'"

So it matches all non-hidden org files, nice.

Prevent sync conflicts

When you sync files, you externally change the files that correspond to buffers that are probably open at all times if you use Emacs org agenda. Once you edit files on your computer, either saving will not work because of conflicts, or you will be asked if you want to read buffer contents from disk again.

One part of avoiding conflicts is to save relevant org files often, the other part is auto-reloading from disk when possible.

Auto-save org buffers to disk

You can enable auto-saving of org files while emacs is running with a hook:

(add-hook 'auto-save-hook 'org-save-all-org-buffers)

That will put auto-saving all open org files on a timer. Performing changes to buffers from the org agenda overview, for example, doesn’t mark the buffer as needing to auto-save, as far as I understand. So this setting helps to auto-save all org buffers regularly.

Auto-reload buffers when files on disk change

Then you can enable auto-reloading from disk when you change files in the Dropbox.

I am not comfortable having global-auto-revert-mode turned on; I’d rather limit this to my task lists. But if this is cool with you, just put (global-auto-revert-mode t) into your init.el file.

If you’re like me and want to set only the task-related org buffers to auto-reload, you need to enable auto-revert-mode minor mode for these buffers. That will reload the buffer when the file changes like the global variant, but limited to a particular buffer.

Do not set this manually. There’s a better way: auto-executing commands when files are loaded.

Emacs will execute whatever you put into a .dir-locals.el as soon as you open (aka “find”) a file from that directory. This can be used to enable auto-revert-mode for all org files in your Dropbox.

In my case, the path would be ~/Pending/org/gtd/.dir-locals.el, and this is what I put inside:

; Enable auto-revert-mode for buffers from files from this directory:
((nil . ((eval . (auto-revert-mode 1)))))


Now when I open my org agenda, all org-agenda-files will be loaded to populate the agenda – that is, all contents of this directory. Each and every corresponding buffer will have auto-revert-mode enabled and, thanks to the global settings, will save regularly to disk.

This isn’t bullet-proof. I can still produce file conflicts easily when I change files on disk before the auto-save is triggered.

But you know what? I didn’t even bother checking the auto-save interval settings. I want to save web links and tick off items from my mobile devices, that’s it. And when I’m on my mobile devices, minutes will have passed since I left my Mac, and the files will already have synced. Same in the other direction. I don’t sit in front of my Mac and tick off stuff on my iPad. That’d be stupid. So I don’t produce sync errors under regular circumstances.

I’m happy with the result, and I hope your life will be better with this, too!

  1. Simple, of course, only in hindsight. As always, figuring out what I am really looking for when tweaking my Emacs setup is the worst part. For example, I was trying to write a ct/org-agenda-reload-all-buffers that calls revert-buffer for each agenda-managed buffer, and I was prepared to manually sync by hitting the “R” key from the agenda view like a caveman. Now I found a totally different approach. 

Add Navigation Buttons to NSTouchBar

Xcode and Safari sport Touch Bar items to navigate back and forth. Have a close look:

Xcode touch bar screenshot
Xcode’s Touch Bar
Safari touch bar screenshot
Safari’s Touch Bar

When you put two NSTouchBarItems next to each other, there usually is a gap. Between the navigation controls, there is a mere hairline divider, but not the regular fixed-width space.

They are not realized via NSSegmentedControl, though. Compare the navigation buttons with the system-wide controls at the far right: volume, brightness, play/pause. (I’m listening to the 1980s Pop Radio on Apple Music at the moment, in case you’re curious.) The system controls are an NSSegmentedControl. They have rounded corners for the whole control, while the navigation buttons have rounded corners for every single button. Also, the navigation buttons have the default button width.

To create separate navigation buttons that sit this close together, you will need to put them in a wrapper view and reduce the spacing between them. An NSStackView, it turns out, works just fine.

I also suspect each app is using its own set of images instead of the system provided ones since the chevrons are of different sizes in Xcode and Safari.

Here’s some sample code that works:

import Cocoa

@available(OSX 10.12.2, *)
fileprivate extension NSTouchBar.CustomizationIdentifier {
    static let mainTouchBar = NSTouchBar.CustomizationIdentifier("the-touchbar")
}

@available(OSX 10.12.2, *)
fileprivate extension NSTouchBarItem.Identifier {
    static let navGroup = NSTouchBarItem.Identifier("the-touchbar.nav-group")
}

class WindowController: NSWindowController, NSTouchBarDelegate {

    override func makeTouchBar() -> NSTouchBar? {
        let touchBar = NSTouchBar()
        touchBar.delegate = self
        touchBar.customizationIdentifier = .mainTouchBar
        touchBar.defaultItemIdentifiers = [.navGroup]
        return touchBar
    }

    func touchBar(_ touchBar: NSTouchBar, makeItemForIdentifier identifier: NSTouchBarItem.Identifier) -> NSTouchBarItem? {
        switch identifier {
        case .navGroup:
            let item = NSCustomTouchBarItem(identifier: identifier)
            let leftButton = NSButton(
                image: NSImage(named: NSImage.touchBarGoBackTemplateName)!,
                target: nil,
                action: #selector(goBack(_:)))
            let rightButton = NSButton(
                image: NSImage(named: NSImage.touchBarGoForwardTemplateName)!,
                target: nil,
                action: #selector(goForward(_:)))
            // Wrap both buttons in a stack view with minimal spacing
            // to get the hairline gap between them.
            let stackView = NSStackView(views: [leftButton, rightButton])
            stackView.spacing = 1
            item.view = stackView
            return item

        default:
            return nil
        }
    }

    @IBAction func goBack(_ sender: Any) { }
    @IBAction func goForward(_ sender: Any) { }
}

Here I use the built-in Touch Bar image names NSImage.touchBarGoBackTemplateName and NSImage.touchBarGoForwardTemplateName.

You can also use your own vector-based @2x images. There’s no benefit over using the built-in ones, really. Just in case, I whipped up two images in my favorite image editor, Acorn, and exported them as PDFs:

Touch bar screenshot with my buttons
This is what my button images look like now. (Click to download.)

Download PDFs

In case you wondered: You can take screenshots of the Touch Bar by hitting ⇧⌘6 (Shift-Command-6).

Replacing More Reference Objects with Value Types

I was removing quite a few protocols and classes lately. Turns out, I like what’s left.

I relied on classes, for example, because they can be subclassed as mocks in tests. Protocols provide similar flexibility. But over the last 2 years, the behavior I was testing shrank to simple value transformations.

Take a ChangeFileName service, for example. It is used by another service during mass renaming of files. It is also used when the user renames a single file. The updatedFilename(original:change:) method is the only interesting part of it. You call this method from other objects and verify this behavior in your tests. Then, one day, you look at the method body of updatedFilename again and find it can be shortened to:

func updatedFilename(original: Filename, change: FilenameChange) -> Filename {
    return original.changed(change)
}

That’s similar to what I was looking at. Filename and FilenameChange are value types (structs) and their transformations are well-tested. The updatedFilename method only makes this call mock-able. But what for? You can change the tests just as well!

This is a typical object-based test with mocks and assertions on transformations:

  1. Is the other object called? (didUpdateFilename)
  2. Is the other object’s result used? (testUpdatedFilename)
/// The test double/mock object
class ChangeFileNameDouble: ChangeFileName {
    var testUpdatedFilename: Filename = Filename("irrelevant")
    var didUpdateFilename: (original: Filename, change: FilenameChange)?
    override func updatedFilename(original: Filename, change: FilenameChange) -> Filename {
        didUpdateFilename = (original, change)
        return testUpdatedFilename
    }
}

func testMassRenaming_UpdatesFilename() {
    // Given
    let changeFileNameDouble = ChangeFileNameDouble()
    let service = MassFileRenaming(changeFileName: changeFileNameDouble)
    /// The expected result
    let newFilename = Filename("after the rename")
    changeFileNameDouble.testUpdatedFilename = newFilename
    /// Input parameters
    let filename = Filename("a filename")
    let change = Change.fileExtension(to: "txt")

    // When
    let result = service.massRename(filenames: [filename], change: change)

    // Then
    if let values = changeFileNameDouble.didUpdateFilename {
        XCTAssertEqual(values.original, filename)
        XCTAssertEqual(values.change, change)
    } else {
        XCTFail("updatedFilename(original:change:) was not called")
    }
    XCTAssertEqual(result, [newFilename])
}

It’s a bit harder to test for an array with 2+ elements, because then you have to collect an array of didUpdateFilename and prepare an array for testUpdatedFilename that’s to be used by the MassFileRenaming service.

All you test is the contract between MassFileRenaming and ChangeFileName. Which is a lot, don’t get me wrong! But then you also have to test ChangeFileName itself to make sure it does the right thing and doesn’t just return bogus values once you stop using a mock.

This test case can be changed to the mere value transformation itself without the “class behavior wrapper”. And since you don’t rely on a mock, it’s easier to test collections of values:

func testMassRenaming_UpdatesFilename() {
    // Given
    let service = MassFileRenaming()
    /// Input parameters
    let filenames = [
        Filename("first filename"),
        Filename("another filename"),
        Filename("third filename")
    ]
    let change = Change.fileExtension(to: "txt")

    // When
    let result = service.massRename(filenames: filenames, change: change)

    // Then
    let expectedFilenames = { $0.changed(change) }
    XCTAssertEqual(result, expectedFilenames)
}

“But isn’t this an integration test now?” Well, as always, it depends on your definition of a “unit”! You don’t write adapters and wrappers for all UIKit/AppKit method calls, either, and you treat Foundation.URL, Int, or String as if there’s nothing to worry about.

The Filename type, in this example, is reliable, too. If it’s in another module, like MyAppCoreObjects, and thus even more isolated from your other app code, wouldn’t you treat it the same as URL?

I still have to rethink some things with regard to value types. But one nice thing is that you can use self-created value types like any built-in types. No need to wrap in a class/reference object.
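To make this concrete, a value type like Filename can be very small. (A sketch with assumed names and a single illustrative change case; the post doesn’t show the real implementation.)

```swift
import Foundation

/// Hypothetical change descriptor; the real type likely has more cases.
enum FilenameChange: Equatable {
    case fileExtension(to: String)
}

/// Hypothetical Filename value type: Equatable for free assertions in
/// tests, and transformations return new values instead of mutating.
struct Filename: Equatable {
    let string: String

    init(_ string: String) {
        self.string = string
    }

    func changed(_ change: FilenameChange) -> Filename {
        switch change {
        case .fileExtension(to: let ext):
            let base = (string as NSString).deletingPathExtension
            return Filename("\(base).\(ext)")
        }
    }
}
```

Because it’s Equatable and cheap to construct, it plugs into XCTAssertEqual like any built-in type, with no reference-object wrapper needed.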

Maybe I need more time to adjust to this kind of thinking. I do know that the talk by Ken Scambler I mention time and again was very helpful to make the transition with confidence, like when I began changing RxSwift view models to free functions.

Do Not Apply Code Heuristics When You Need a Broader Perspective

You can only improve things inside the frame you pick. If your frame is too narrow for the problem you try to solve, you cannot properly take everything into perspective. That’s a trivial statement as it’s only re-stating the same thing, but it’s worth stressing.

Apply this to code.

If you focus on code heuristics to improve your code base, you cannot improve the structure of your program. Even though the structure is manifested as code, it’s not code you should be thinking about. It’s concepts. Code is just the textual representation you stare at all day. Structure is what the imaginary entities of your invention bring forth.

What are “code heuristics”?

In general terms, it’s thinking in the terms of the solution space or implementation: functions, classes, libraries. The opposite of the solution space is the problem space or problem domain. Your air traffic control code contains methods that are called; the problem is about airplanes and collision courses. Your implementation is a translation of the problem domain, of what real people talk about, into machine-readable commands; that’s the translation from problem to solution space.

For example, the “Don’t Repeat Yourself” principle (DRY) is a code heuristic. It states that you should avoid pasting the same code all over the place. Instead, the DRY principle advises you to extract similar code into a shared code path, e.g. a function or method that can be reused.

Please note that “extract” is also a common refactoring term and “reuse” generally has a positive connotation. Doesn’t mean it’s a good idea to follow it blindly. Because when all you know are code-level tricks and heuristics to improve the plain text files, you sub-optimize for code. This endangers the problem–solution-mapping as a whole.

That’s “sub-optimization” in the dictionary sense: “optimization of a part of a system or an organization rather than of the system or organization as a whole.”

You endanger the solution you are trying to improve when your criteria of “better” and “worse” are only relative to the level of code – like “do these parts of code look the same?”

One problem with following DRY blindly is that you end up extracting similar-looking code instead of encapsulating code that means or intends the same. This is sometimes called “coincidental cohesion”. You wind up reducing code duplication for the sake of reducing code duplication.

If you confuse mere repetition of the same characters in your source text files with code that intends the same outcome, you can end up with 1 single function that you cannot change anymore without affecting all the wrong parts of the program. This becomes a problem when you need to adjust the behavior in one place but not another. You cannot adjust this if the code is shared. (If you’re lucky, you duplicate the code again; if you’re unlucky, you change the shared code and break another part of the app.)

Take this code:

let x = 19 * 2
let y = 19 * 4

A DRYer example:

func nineteenify(_ num: Int) -> Int {
    return num * 19
}

let x = nineteenify(2)
let y = nineteenify(4)

But what if one of the 19s is the German VAT rate, and Germany’s tax legislation changes it to 12, while the other 19 is just your lucky number used for computing an object hash, or whatever? (Both similar-looking “magic numbers” represent concepts, albeit different ones in this case!)
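The fix is to name each concept separately, even while the values happen to coincide. A sketch with illustrative names:

```swift
// Two distinct concepts that merely share a value today.
// Names are made up for illustration.
let germanVATRate = 19   // changes when tax legislation changes
let luckyHashSeed = 19   // changes for entirely unrelated reasons, if ever

let x = luckyHashSeed * 2
let y = germanVATRate * 4
```

Now a change to the tax rate touches exactly one declaration, and the “duplication” that remains is harmless because it was never the same concept.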

Another code heuristic is called symmetry. The lines that make up your methods should look similar to increase readability.

Take the following code for example. You are not satisfied with it anymore because each line looks different from all the others:

func eatBanana(banana: Banana) {
    let peeledBanana = banana.peeled
    guard isEdible(peeledBanana) else { return }
    assimilateNutrients(peeledBanana)
}

To improve the symmetry of the code and thus increase readability, you figure that this sequence is essentially a transformation in 3 steps. Also, you like custom operators for their brevity and how code reads with the bind or apply operator, |>. Think of it as applying filters here. You increase terseness to reduce noise and end up with this:

func eatBanana(banana: Banana) {
    self.prepare(banana: banana)
        |> self.checkEdibility
        |> self.assimilateNutrients
}

// Extracted:

private func prepare(banana: Banana) -> PeeledBanana? {
    return banana.peeled
}

private func checkEdibility(peeledBanana: PeeledBanana) -> PeeledBanana? {
    guard isEdible(peeledBanana) else { return nil }
    return peeledBanana
}

Looks better, maybe? If you understand how the operator works, this could be useful to state the intent of eatBanana more clearly as a step of value transformations.
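The snippet assumes a |> operator that short-circuits on nil. A minimal definition could look like this; it’s my sketch, not necessarily the one used above:

```swift
precedencegroup ApplyPrecedence {
    associativity: left
}
infix operator |> : ApplyPrecedence

/// Feeds an optional value into the next transformation step,
/// short-circuiting to nil as soon as any step fails.
func |> <A, B>(_ value: A?, _ transform: (A) -> B?) -> B? {
    guard let value = value else { return nil }
    return transform(value)
}
```

Note that this is just Optional.flatMap in operator clothing: `value.flatMap(transform)` does the same thing without the custom operator.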

Questions not asked in the meanwhile:

  • What does the rest of the class look like? Are there 100 other methods in the affected type? Does this one method become easier to read while the type as a whole becomes harder to understand? (Another code-level heuristic.)
  • Whose responsibility is the preparation of bananas? Do these methods fit together into a single object? Is the receiver of these methods the best or just whatever is accidentally owning eatBanana? Should this be separated into multiple objects with more focused responsibilities instead? If you did that, would the original problem of symmetry vanish already? (This is not a code-level decision anymore.)

I hope this illustrates what a narrow focus on the solution space brings forth. Treating your code as a piece of text you can improve with grep or sed transformations may work, but if all you do is extract similar strings of characters from text files manually, are you asking the right questions?

In a similar fashion, Design Patterns are code templates. They help write code that solves a particularly intricate problem. Design Patterns are complex solutions, but they are confined to the solution space nevertheless. This includes the popular MVVM as well as the Factory or Visitor design pattern.

“To implement MVVM, you need a Model type, a View type, and a View Model object/protocol/… that is based on the Model but exposes an interface tailored to the View.” – This is a recipe for writing code. It does not and cannot answer the tough question: what is the best abstraction for your model? What components do you need to translate the problems of air traffic control into an iOS app, for example?

Is an Aircraft class a good abstraction?

This question cannot be answered by reading more code. The answer cannot be sped up with recipes.

This question can only be answered by thinking about the problem.

I think part of software architecture is modeling the problem domain, and isolating your translation of the original problem from e.g. UIKit code properly. That’s when architectural heuristics come into play, like “use layers to separate different parts of your app and avoid a mess.” The VIPER pattern is a higher-level code pattern that affects your architecture.

And this is what bugs me about most of the conversation in our community at the moment: the conflation of Design Patterns (code heuristics) with “Application Architecture” (which goes way beyond that). When we keep these concepts separate, we can continue to focus on one at a time and get better. It’s like the DRY problem from above all over again: if your frame is too narrowly focused on visible repetition on your screen, you cannot have the bigger picture in mind.

If we continue to fail to distinguish these concepts, we end up losing the ability to ask some kinds of questions.

And I argue some of these are the really important ones that help your software as a solution to a real-world problem continue to thrive.
