Before WWDC 2015, I blogged about the new functionality we’d like to see in iOS 9. Apple have now revealed their plans at the conference, and while we’re still watching videos and reading through updated documentation, we’ve seen enough to follow up on our wishlist – and to react to the items we didn’t see coming but that interest us nonetheless.
How did our wishlist fare?
To recap, we were hoping for the following things:
- Bug fixes for WKWebView, and functionality to bring it closer to what was possible with UIWebView.
- Bug fixes for UISearchController.
- Some way to perform full text searching in Core Data, without having to improvise a solution ourselves.
On the first two topics there’s not much to report – we’re still seeing the same issues with WKWebView and UISearchController in the first beta of iOS 9 that were present in iOS 8. Since there’s time for those to be addressed before a GM is released, and Apple have at least begun to acknowledge some of them, we’re still hopeful for a resolution. We drew a total blank on the final item: official full text search support in Core Data. A number of improvements to Core Data were announced (more on those below), but alas this was not one of them.
It’s not all doom and gloom though – there’s plenty that Apple revealed that we’re quite excited about. In no particular order…
CoreSpotlight Search API
While we didn’t get the full text search functionality we dreamed of, Apple did announce something search-related that we’ll definitely be taking advantage of: a third-party API for Spotlight, the OS search feature that allows users to search from Springboard.
By exposing our content for indexing by iOS our users will be able to search for their Bipsync content right from their home screen, without having to open an app first. It’s a convenience to be sure, but it could be argued that by supporting OS level integration we make it more likely that users will remain engaged with our apps. It also reinforces the notion that an app’s data is bound to the device, which is important to our users.
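Indexing content for Spotlight is pleasingly direct. Here’s a minimal sketch in current Swift syntax (the `Note` type and its fields are invented stand-ins, not Bipsync’s actual model):

```swift
import CoreSpotlight
import MobileCoreServices

// Hypothetical note model, for illustration only.
struct Note {
    let id: String
    let title: String
    let body: String
}

func indexNote(_ note: Note) {
    // Describe the content so Spotlight can display and rank it.
    let attributes = CSSearchableItemAttributeSet(itemContentType: kUTTypeText as String)
    attributes.title = note.title
    attributes.contentDescription = note.body

    // The unique and domain identifiers let us update or delete
    // the item later, and handle taps on search results.
    let item = CSSearchableItem(uniqueIdentifier: note.id,
                                domainIdentifier: "notes",
                                attributeSet: attributes)

    CSSearchableIndex.default().indexSearchableItems([item]) { error in
        if let error = error {
            print("Indexing failed: \(error.localizedDescription)")
        }
    }
}
```

When the user taps a result, the app is launched and handed the item’s unique identifier, so we can deep-link straight to the relevant note.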
UI testing in Xcode
We consider UI (aka integration, aka functional) tests to have significant value; since they span the entire system, they offer a high level of test coverage with relatively little effort. We’re currently using Appium to execute our tests against a real app instance running on a simulator and on real devices. The tests are written in PHP and run with PHPUnit. This has been working well enough, but we’ve found the disconnect between the application code and the test code makes it hard to maintain a concentrated coding cycle where tests are written at the same time as the application code. Going back and forth between Objective-C and PHP is jarring.
So the improved UI testing functionality that will debut with Xcode 7 is very appealing to us – especially the aspect of it that can translate a developer’s interaction with the app into Objective-C code, dramatically reducing the amount of code needed to get a test into the right state for assertion. Coupled with the newly integrated Code Coverage feature, this will aid us in ensuring that our app has a healthy, substantial set of tests. We might even write the test code in Swift, which would have the advantage of us getting our hands dirty with the language with little risk to the app.
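A recorded interaction ends up as ordinary XCTest code. This sketch uses current Swift syntax, and the accessibility identifiers (“New Note”, “Note Title”, “Save”) are hypothetical – they’d come from our actual app:

```swift
import XCTest

class NoteCreationTests: XCTestCase {

    func testCreatingANoteShowsItInTheList() {
        // XCUIApplication launches a fresh instance of the app under test.
        let app = XCUIApplication()
        app.launch()

        // Steps like these can be generated by Xcode's interaction recorder.
        app.buttons["New Note"].tap()
        app.textFields["Note Title"].tap()
        app.textFields["Note Title"].typeText("Meeting notes")
        app.buttons["Save"].tap()

        // Assert against the resulting UI state.
        XCTAssertTrue(app.staticTexts["Meeting notes"].exists)
    }
}
```

Compared with our PHP/Appium suite, the test lives in the same workspace and language family as the code it exercises.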
Storyboard References
Storyboards are a feature of Interface Builder which allow developers to map out an application’s workflow in a visual editor. They’re a bit “Marmite” – some developers love them, others hate them and instead opt to design their UI in code. The capabilities of Interface Builder have improved in recent years such that we now use Storyboards extensively, but not without issue: a common complaint is that it’s impossible to segue between (i.e. link) multiple storyboards, so applications inevitably end up with a single massive storyboard containing every UI workflow. These files are slow to open and work with, and spoil the experience in such a way that it’s unsurprising that some developers give up on the endeavour altogether.
Storyboard References should change all that, by allowing developers to link from one storyboard to another and reuse them in a modular fashion. There’ve been hacks to do this sort of thing in the past, but they’ve always felt like hacks, and I’m wary of solutions that seem to be against the spirit of the platform. Once this feature is available we’ll be able to break our substantial primary storyboards down into a set of more manageable modular ones, and maintaining application UI will be a far more pleasant experience.
Stack Views
While Storyboards are a contentious topic among iOS developers, they have nothing on Auto Layout. First introduced in iOS 6, Auto Layout is a technique that involves using a set of constraints to define where user interface elements should be positioned on the device’s screen, relative to other elements. Interface Builder already had a layout engine commonly known as “springs and struts”, which was, and still is, comparatively easy to understand and use. As a result, many developers initially adopted Auto Layout grudgingly, if at all – there seemed to be no sense in using another tool when it offered little but additional complexity.
That all changed last year though when the iPhones 6 and 6+, and later the Apple Watch, were announced. Suddenly there were almost twice as many screen sizes to cater for, and Apple have done nothing to dispel the notion that there are plenty more to come. Certainly the days of iOS devices being limited to a small set of screen resolutions are over. And here is where Auto Layout comes into its own, designed as it is to manage the adjustment of UI components to suit any two dimensional space. But still the complexity remains – often hours of torturous experimentation are required to accomplish layouts for even simple designs.
Stack views will hopefully change that, by leveraging the power of Auto Layout but presenting it within a simple concept – views can be stacked horizontally, or vertically, and those stacks can be embedded in other stacks ad infinitum. This approach will account for the vast majority of view layouts and save developers from having to get too involved with the intricacies of Auto Layout itself. Without a feature like this, I’m loath to take on a complicated design with Auto Layout as I know I’m setting myself up for hours of frustration. With it, I’m excited to see what can be achieved.
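Stack views can also be built in code. A minimal sketch in current Swift syntax – a vertical stack containing a label and a nested horizontal stack of buttons – where the only constraints we write by hand pin the outer stack to its container; the stack generates everything else:

```swift
import UIKit

// Illustrative only: a label above a row of two buttons.
func makeToolbar(in container: UIView) -> UIStackView {
    let title = UILabel()
    title.text = "Tasks"

    let add = UIButton(type: .system)
    add.setTitle("Add", for: .normal)
    let clear = UIButton(type: .system)
    clear.setTitle("Clear", for: .normal)

    // Horizontal stack nested inside a vertical one.
    let buttons = UIStackView(arrangedSubviews: [add, clear])
    buttons.axis = .horizontal
    buttons.spacing = 8

    let stack = UIStackView(arrangedSubviews: [title, buttons])
    stack.axis = .vertical
    stack.spacing = 8
    stack.translatesAutoresizingMaskIntoConstraints = false

    container.addSubview(stack)
    // Only the outer stack needs explicit constraints; the arranged
    // subviews are laid out by the stack view itself.
    NSLayoutConstraint.activate([
        stack.leadingAnchor.constraint(equalTo: container.leadingAnchor),
        stack.trailingAnchor.constraint(equalTo: container.trailingAnchor),
        stack.topAnchor.constraint(equalTo: container.topAnchor)
    ])
    return stack
}
```

Adding or removing an element becomes a call to `addArrangedSubview(_:)` rather than a round of constraint surgery.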
Automatic NSNotification observer deregistration
In the grand scheme of things this one isn’t going to rock many people’s worlds, but I’ll certainly appreciate it. Until iOS 9 any object that registered for notifications from NSNotificationCenter had to also be unregistered before it was deallocated, otherwise developers risked crashes caused by bad memory access. It was an easy thing to do accidentally, as Brent Simmons recently described. Now in iOS 9 the system automatically does this for us, which means one less way to inadvertently introduce bugs into our applications. Win.
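To illustrate the bookkeeping involved, here’s a minimal Foundation sketch (the class and notification name are invented). Note the scope of the change: the automatic deregistration in iOS 9 applies to observers registered with `addObserver(_:selector:name:object:)`; block-based observers like the one below still hand back a token that must be removed explicitly:

```swift
import Foundation

// Before iOS 9, every selector-based observer had to be removed before
// deallocation, or a later notification could message a dangling pointer
// and crash. iOS 9 handles that case automatically.
final class SyncStatusListener {
    private var token: NSObjectProtocol?
    private(set) var eventCount = 0

    init() {
        token = NotificationCenter.default.addObserver(
            forName: Notification.Name("SyncDidFinish"),
            object: nil, queue: nil) { [weak self] _ in
            self?.eventCount += 1
        }
    }

    deinit {
        // Still required for block-based observation, even on iOS 9.
        if let token = token {
            NotificationCenter.default.removeObserver(token)
        }
    }
}
```
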
Keyboard command HUD
Another small but useful addition, this allows the user to call up an inventory of keyboard shortcuts when they’re using a hardware keyboard. Application developers are able to add their own shortcut commands to the inventory. This’ll be handy for us as it’ll mean we can expose our custom commands (like the insertion of a list or task in the rich text editor of Bipsync Notes) for discovery and consumption, improving the user onboarding process.
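Adding commands to the HUD is a matter of overriding `keyCommands` on a responder. A sketch in current Swift syntax – the shortcuts and action names are hypothetical stand-ins for the editor actions described above:

```swift
import UIKit

class EditorViewController: UIViewController {

    override var keyCommands: [UIKeyCommand]? {
        return [
            // discoverabilityTitle (new in iOS 9) is the text shown in
            // the HUD when the user holds down the Command key.
            UIKeyCommand(input: "l", modifierFlags: .command,
                         action: #selector(insertList),
                         discoverabilityTitle: "Insert List"),
            UIKeyCommand(input: "t", modifierFlags: .command,
                         action: #selector(insertTask),
                         discoverabilityTitle: "Insert Task")
        ]
    }

    @objc func insertList() { /* editor-specific */ }
    @objc func insertTask() { /* editor-specific */ }
}
```

Commands without a `discoverabilityTitle` still work but don’t appear in the HUD, so titles are worth adding for anything user-facing.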
Core Data improvements
As I mentioned at the top of this article, Core Data has been improved with a few new features including batch deletion of objects, to accompany the batch update feature that was introduced last year. The ones that stood out to us were:
- The boilerplate NSManagedObject property/accessor code that is generated by Xcode is now stored alongside the concrete class as a category, rather than inside the class itself. This means that we can customise the concrete classes without fear: if we later evolve the class’ properties or relationships and regenerate the boilerplate code, doing so won’t overwrite our customisations, as happens currently. We’ve written about how we use Mogenerator to avoid this issue – hopefully this change will mean we’ll have one less tool to manage, as useful as Mogenerator has been for us.
- Core Data is now able to reason about which objects are unique, based on a set of constraints. This should make our syncing code much more straightforward, as we’ll be able to simplify the algorithms that ‘create or update’ our managed objects to reflect the data from API responses. For more details on why this can be a problem, check out this talk by Matthew Morey.
- We’ve seen a few crashes in our sync code due to managed objects being deleted via the UI while a sync is running in the background; when the background sync then attempts to reference the deleted object, iOS throws an exception and the app crashes. It’s possible to work around the issue by refetching the object to see if it has been removed, but this isn’t foolproof, and has to be performed so often to be safe that it bloats the codebase. Now in iOS 9, by default a deleted object will behave like a ‘nil object’, and can be accessed safely without triggering a crash. This should make a lot of developers’ code more stable (it was announced to a healthy round of applause, always a good sign!).
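The first two additions can be sketched in current Swift syntax. The “Note” entity, its attributes, and the `remoteID` uniqueness constraint (declared in the model editor) are hypothetical stand-ins for our actual model:

```swift
import CoreData

// Batch deletion runs directly in the persistent store, without
// loading the matching objects into memory first.
func deleteArchivedNotes(in context: NSManagedObjectContext) throws {
    let fetch = NSFetchRequest<NSFetchRequestResult>(entityName: "Note")
    fetch.predicate = NSPredicate(format: "archived == YES")
    let request = NSBatchDeleteRequest(fetchRequest: fetch)
    try context.execute(request)
}

// With a uniqueness constraint on Note.remoteID, 'create or update'
// collapses into 'insert and let the merge policy resolve it'.
func upsertNote(remoteID: String, title: String,
                in context: NSManagedObjectContext) {
    context.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
    let note = NSEntityDescription.insertNewObject(forEntityName: "Note",
                                                   into: context)
    note.setValue(remoteID, forKey: "remoteID")
    note.setValue(title, forKey: "title")
    // On save, an existing Note with the same remoteID is updated
    // in place rather than duplicated.
}
```

One caveat with batch deletion: because it bypasses the context, any in-memory objects need to be refreshed (or the changes merged) afterwards so the UI doesn’t show stale data.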
Despite a lack of progress on the issues we’ve reported so far, iOS 9 looks to be what many of us were asking for – a release less concerned with new features than with the improvements and tweaks that make developing on iOS easier. The reduction in the amount of space a device needs in order to upgrade to the new OS will hopefully encourage adoption, so we can employ these improvements in the very near future.