All of these features and developments revolved around the core mission of Instapaper, which is allowing our users to discover, save, and experience interesting web content. In that respect, there is a lot of overlap between Pinterest and Instapaper. Joining Pinterest provides us with the additional resources and experience necessary to achieve that shared mission on a much larger scale.
Instapaper provides a compelling source for news-based content, and we’re excited to take those learnings to Pinterest’s discovery products. We’ll also be experimenting with using our parsing technology for certain Rich Pin types.
Most software company acquisitions in the tech world at the moment are head-turning and puzzling, but this isn’t one of them, as far as I’m concerned. Pinterest is a social bookmarking website with a focus on photography. A purchase of Instapaper allows them to easily grow into other content types, namely text.
Hidden at the bottom of this announcement is a ‘sunsetting’ of Instaparser, a paid API endpoint for developers to take advantage of Instapaper’s intelligent article parsing. The service is shutting down in November; it launched in April and is being shuttered in the same year it debuted. This is pretty crappy, especially given Instaparser was a paid service charging hundreds of dollars per month for an API key.
Personally, I didn’t see the motivation for Instaparser to exist. It sucks for those who adopted it but shutting it down is the right strategy for a company that is otherwise very focused on just doing one thing. It was a mistake to release Instaparser at all.
How do we know when a new golf course opens up? We’re not exactly driving around looking for golf courses. But we know it’s there, because there are all these golf apps that get used at a golf course. If we see that all these golf apps are being used at a particular location, and we don’t show that as a golf course, we probably have a problem. You can discover that pretty quickly. It’s not as if you need a year, or anything like that.
The inferences drawn from the crowdsourced data are then followed up on with ground-truth teams (people who drive around verifying locations actually exist), web research or satellite imagery checks. It would be unfair to say that Apple powers Maps with crowdsourcing alone, but it definitely plays a role in finding points of interest and road changes.
What I think is interesting is how much Federighi and Cue play up the benefits of data collection; I’ve never seen them emphasise it like this before. Usually, they are very coy, with endless assurances about privacy and anonymity. In this interview, though, they admit that the data they do collate is enough to accurately pinpoint new sports venues. It may not be personally identifiable, but the fact Apple can trace the construction of golf courses from App Store downloads is pretty wild.
Cue’s enthusiasm also runs counter to Apple’s company message that it doesn’t want your data. I am in the camp that Apple could build better products if it did collect more data, and it sounds like Cue and Federighi are almost begging for the opportunity to use it. I hope Apple relaxes its privacy stance somewhat and offers features that simply aren’t possible without accumulating a centralised store of user data on servers. Health data cloud syncing is the obvious example.
iOS 10 includes over 100 new emoji in total with new female gender options for athletes and professionals, new family characters and redesigns of the most popular emojis.
Apple is working with the Unicode Consortium to ensure representation of diversity across emojis. In iOS 10, Apple is adding new female emojis for runners, cyclists, swimming, builders and more. There is also a new rainbow flag (assumedly representing LGBTQ Pride), and the pistol emoji is now depicted as a toy water gun.
In the Unicode standard, this character is defined as a ‘pistol’. The Emojipedia definition describes the emoji as “a weapon that has potential to cause great harm”.
With previous iOS versions, Apple presented this emoji as an actual gunmetal pistol. With iOS 10, it has been changed to a green toy water gun. I don’t like how Apple has handled this. It has nothing to do with the associated political implications of free speech and everything to do with the way Apple has implemented the change technically.
My personal qualm is that Apple has distorted the integrity of the emoji language by replacing a character’s glyph with one that has a very different meaning. A toy water gun conveys very different intentions than a real gun. The 🔫 emoji now has an ambiguous meaning because it represents different objects across devices and operating systems.
On iOS 10 devices, the emoji in the previous paragraph looks like a water gun. On iOS 9 or on my OS X El Capitan Mac (on which I am typing this post) it looks like a serious weapon — a handgun. By the way, VoiceOver will describe that character as a ‘pistol’ on all devices, including iOS 10.
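The underlying point is that the character itself never changes: it is one Unicode code point, and only the glyph a platform draws for it varies. A minimal Swift sketch of inspecting that code point (the ‘pistol’ name comes from the Unicode standard, not from any Apple rendering):

```swift
import Foundation

// U+1F52B is defined by Unicode as PISTOL, regardless of whether a
// platform renders it as a handgun (iOS 9, OS X) or a water gun (iOS 10).
let pistol = "\u{1F52B}"  // 🔫

// The string is backed by a single Unicode scalar; the value stored in
// a message is identical on every platform even though the glyph differs.
for scalar in pistol.unicodeScalars {
    print(String(format: "U+%04X", scalar.value))  // prints U+1F52B
}
```

This is why the ambiguity matters: a message stores the code point, not the picture, so the sender and recipient can see very different objects for the exact same data.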
I tried to show a more sinister example on Twitter where the meaning of a message can now be misconstrued if the recipient and sender are not using the same OS.
Rather than recycle the same character code point, here’s what I think Apple should have done: remove the pistol emoji from Apple’s keyboard completely, then lobby the Unicode Consortium to add a new emoji that depicts a ‘toy water pistol’ and include that in the keyboard.
This handling is better because it does not affect usage of the handgun emoji on other clients; Apple would just be disabling the ability to write it on its systems. The water gun emoji would then be true to itself: a unique code point, a consistent depiction of a toy water pistol across all platforms that support the latest Unicode standard, and the correct labelling for VoiceOver accessibility.
Assuming Apple followed my suggestion, there’s still a question about how to display a message that includes a handgun emoji sent from someone else. Apple could let the symbol display, meaning iOS 10 would allow users to read, but not write, the gun emoji. Alternatively, it could treat the handgun as an unknown character and display the typical ‘character unknown’ black box placeholder glyph. I think either is fine, although the latter makes a more forceful statement on society.
Mr. Cue is also known for a hard-nosed negotiating style. One cable-industry executive sums up Mr. Cue’s strategy as saying: “We’re Apple.”
By 2009, Apple executives were considering a subscription streaming-TV service. To entice media companies, Apple offered higher fees than pay-TV providers for their broadcast channels. But Apple wanted only certain channels, so the effort fizzled.
I think an implicit line of argument in this (extensively detailed) piece is that Apple has messed up by not having a TV product out the door already. The Eddy Cue anecdotes are funny, but there’s definitely a sense of blame, as if Cue’s lack of cooperation is the problem blocking a resolution.
I disagree. Aside from perhaps the Hawaiian shirt incident, it just sounds like Cue is representing the needs and wants of Apple. Apple clearly doesn’t want to compromise on its vision for a TV service, so if it can’t get the terms it wants, it doesn’t pursue a deal. Cue isn’t causing friction, he’s just conveying Apple’s set-in-stone wishes. This isn’t an ‘Apple needs Steve Jobs’ problem either; the report documents a decade of failed negotiations with media companies, including when Jobs was alive.
In fact, it would be worrying if Cue was too compliant and arranged a halfway-house solution. Apple repeatedly says it will only do things it can do well and rejects things that are good, not great. Bowing to the wants of backwards-thinking media companies wouldn’t fulfil that criterion.
Holy. I just discovered the force touch Clear All on the watch notifications screen. Just now. July 25, 2016.
The subtext to this tweet is a criticism of Force Touch in terms of discoverability. In this particular case, I don’t think it is a UI issue. In fact, it’s an ideal use of Force Touch in the Apple Watch interface: revealing an action that relates to the context of the current screen.
Concessions have to be made for a screen canvas that is the size of your wrist. If a button was present on the screen, I have no doubt that Drance would have found the Clear All function sooner. If we were talking about an iPad, where more screen space is available, I’d agree that some form of Clear All action should be more obviously present, probably as a text button at the bottom of the list.
The reality is an Apple Watch display is simply too small to put everything on screen at once; some things have to be slightly obscured. Secondary actions1 cannot afford to take up persistent space in many cases: there simply isn’t the real estate. I believe the right design decision was made here.
Clear All is less important than actually viewing the list of notifications; adding a button for Clear All would somewhat impair that (far more) frequent activity.
Heck, even on macOS most actions for the current app aren’t visible onscreen all of the time, but we still know they are ‘there’ because we have habitually learnt the vocabulary of the operating system. Things like using right-click, popover panels and menubar items become ingrained behaviours.
Apple is establishing a similar behaviour for Apple Watch: Force Touch to reveal contextual actions for the current screen. The actions should be relevant to the modal view as a whole, not tied to the particular element you pressed down on.
Force Touch to Clear All is a coherent use of this system function. Becoming familiar with Apple Watch should involve an instinctual feeling to deep press screens when you want to perform a contextual action. This is consistent across Apple’s apps and should become learned behaviour over time.
Even when there aren’t any contextual actions for a view, Apple Watch UI will always respond with a screen bounce animation to suggest that the user was right to attempt the gesture and should try again on another screen.
Maybe watchOS is too new for it to have become a natural instinct yet; maybe Apple should do a better job of educating users about it. I don’t think the premise of placing the Clear All action behind a Force Touch gesture is a bad design decision, though.
1 If you use ‘Clear All’ frequently such that it feels like a primary action, my opinion is you are using notifications wrong and should consider turning off alerts for some apps in the Apple Watch settings.
When true self-driving is approved by regulators, it will mean that you will be able to summon your Tesla from pretty much anywhere. Once it picks you up, you will be able to sleep, read or do anything else en route to your destination.
You will also be able to add your car to the Tesla shared fleet just by tapping a button on the Tesla phone app and have it generate income for you while you’re at work or on vacation, significantly offsetting and at times potentially exceeding the monthly loan or lease cost. This dramatically lowers the true cost of ownership to the point where almost anyone could own a Tesla.
In cities where demand exceeds the supply of customer-owned cars, Tesla will operate its own fleet, ensuring you can always hail a ride from us no matter where you are.
This last item is the most interesting thing in Musk’s ‘Master Plan, Part Deux’. Cool idea, but it seems very pie in the sky. Reliable, foolproof autonomous intelligence is still many years away … and that’s just half of this concept. The other issue is getting people to volunteer their cars to a self-driving fleet, surrendering their vehicle for unknown durations to unknown people.
The good thing about taxi cabs and current ride-sharing models is the cars are manned by a human: someone is always there to monitor the actions of the travelling passengers. Leaving my car in the sole possession of someone else is an uncomfortable notion. The cars may drive themselves but they don’t clean themselves or repair themselves.
The last sentence is a much clearer path, where Tesla owns and maintains dedicated vehicles for taxi services. It still depends on the realisation of autonomy but the business model is clear. It’s Uber … without the overhead costs of paying drivers.
The following is a discussion of Swift 3’s controversial approval of the ‘sealed by default’ proposal that puts constraints on subclassability. To contextualise the decision, it is first necessary to review how Swift approaches access control.
In Swift, types and members default to internal visibility. This means they are only visible within the scope of the same module; from another module, internal types are not accessible at all. Making these things accessible requires a public keyword on every symbol. Nothing is exposed to the wider project by default unless it is explicitly marked; only the things the developer has chosen to make available are visible to other modules.
This sounds onerous but it actually makes sense from a codebase design perspective. Generally, most methods and properties written into a class or struct are implementation details which are irrelevant to other consumers. As code is read more often than it is written, the benefits of distinguishing a public and private API surface outweigh the burden of having to write a public declaration every so often.
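As a minimal sketch of that default (the module and type names here are my own, hypothetical ones):

```swift
// Imagine this lives in a framework module called ImageKit.
public struct ImageLoader {
    // Explicitly public: visible to any app that imports ImageKit.
    public func load(named name: String) -> Bool {
        return decode(name)
    }

    // No modifier, so internal by default: usable anywhere inside
    // ImageKit, but invisible to importing modules.
    func decode(_ name: String) -> Bool {
        return !name.isEmpty
    }
}

// In a separate app module:
//   let loader = ImageLoader()
//   loader.load(named: "icon")   // fine: load is public
//   loader.decode("icon")        // error: 'decode' is inaccessible
```

The public surface of the module is exactly the set of symbols the author deliberately marked, nothing more.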
This ideology is central to Swift, which favours explicit statements over implicit behaviours. This is done primarily, but not entirely, to encourage best coding practices. Developers have to make a conscious decision about which parts of the interface are public and which aren’t. It also enables potential performance benefits like static dispatch and intelligent high-level features like generated headers.
All of this strictness is uncomfortable for developers coming from Objective-C, a lax language that lets everything be ambiguously public or private at the mercy of the programmer. It was uncomfortable for me. Swift allows for the same dynamic runtime features1, but it wants those capabilities to be explicitly defined and constrained to only the symbols that require them.
The title of the post has nothing to do with any of this functionality, of course, but there are clear parallels in how Swift is thought about and designed.
‘Sealed by default’ is a separate concept from runtime manipulation or access control in terms of functionality: sealed classes cannot be subclassed outside of the module they are declared in. The underlying premise, though, is the same: only enable functionality when it is appropriate, using keywords to denote special entitlements.
Objective-C barely has the concept of modules, let alone sealing. Any class in Objective-C can be inherited from and overridden regardless of what framework it resides in. Swift 2 already places some limitations on this freedom: although anything can be subclassed by default, there is a final keyword that prevents any source from subclassing a class (essentially turning it into a reference-type struct).
final is more restrictive than sealed, which is more restrictive than open (the implicit Objective-C behaviour). Sealed classes are still open inside their own module. This allows flexibility for the module author (supporting the common class cluster pattern) whilst remaining closed to the rest of the codebase.
The concept of sealed classes does not exist in Swift 2 at all but is going to be the new default in Swift 3. Developers of modules can add the ability for classes to be subclassed by anyone using the open keyword on relevant type declarations.
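A sketch of the three levels under the Swift 3 rules, using hypothetical types (the framework/app split is shown with comments):

```swift
// Inside a hypothetical framework module:

// public but not open: usable by importers, yet subclassable
// only within this module, i.e. sealed.
public class Formatter {}
class CachedFormatter: Formatter {}  // fine: same module

// open: the author explicitly permits external subclassing.
open class Parser {
    open func parse(_ input: String) -> String { return input }
}

// final: cannot be subclassed anywhere, not even in this module.
public final class Registry {}

// In an importing app module:
//   class MyParser: Parser {}        // fine: Parser is open
//   class MyFormatter: Formatter {}  // error: sealed outside its module
//   class MyRegistry: Registry {}    // error: Registry is final
```

Note how the sealed default still supports internal hierarchies like CachedFormatter, which is exactly the class cluster flexibility mentioned above.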
This choice for classes to be sealed by default with Swift 3 has caused a lot of controversy; even the core team admitted there was no consensus in their mailing list post approving the change. I think it is the right thing to do but it’s not hard to see why others are angry.
The change removes the capability for application developers to subclass third-party library and framework code. The module defines what can and can’t be overridden. Sealed doesn’t affect a developer’s own classes, but it does stop developers from overriding framework classes, like those found in UIKit and AppKit.
Developers can use clever subclassing tricks to resolve some bugs that exist in third-party frameworks. These are almost always unsupported brittle changes, though, that aren’t guaranteed to be stable or keep working between OS versions.
To be frank, it is a fluke that this stuff even works. Subclassing where you aren’t supposed to is essentially injecting code into someone else’s private components. Ask any Apple engineer and they will tell you never to subclass UIKit. In Objective-C, this is only expressed via documentation and guidelines. With Swift 3, it is enforced by the compiler and cannot be ignored.
Perhaps there is a debate here about the usefulness of subclassing to combat bugs. I don’t think it is very useful, though, and it will become even less so as people write frameworks in Swift, where classes aren’t that common and structs or enumerations are favoured instead. A good example is the adoption of C libraries, which are made up of free functions. These functions can and do have bugs, with no recourse via inheritance, and this has not stunted adoption.
In general, language design should not be decided by the possible existence of buggy code. However much we strive to make perfect code, there will always be bugs. Sealed by default also prevents a different swathe of bugs from happening as API users don’t have to rely on humans to check documentation about whether something is supported. Sealed, final and open allow coders to accurately convey how their APIs are meant to be used, at least more accurately than Objective-C did.
As highlighted by the preface of this post, I hope the parallels between stricter rules about inheritance and stricter public-private access control are self-evident.
Designing and enforcing rules for inheritance is aligned with Swift as a language. It would be inconsistent not to have sealed by default with explicit keywords to allow for stricter or looser inheritance. It brings several benefits. Static dispatch can be employed more frequently when the compiler can guarantee there are no external subclasses. Performance benefits for a GUI application are minimal, granted, but every little helps.
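As a small sketch of the dispatch point, using a hypothetical final class: when the compiler can prove no override exists, the call does not need a dynamic lookup.

```swift
// Because Telemetry is final, record(_:) can never be overridden.
// The compiler can therefore use static dispatch (and potentially
// inline the call) instead of a dynamic vtable lookup.
public final class Telemetry {
    public func record(_ value: Int) -> Int {
        return value + 1
    }
}

let t = Telemetry()
_ = t.record(41)  // resolved at compile time
```

Sealed classes get a similar guarantee at module boundaries: any call from outside the module can be devirtualised if no subclass exists inside it.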
Of course, the primary reason is creating a programming model that is more correct, with proper encapsulation and containment. Classes that aren’t meant to be subclassed can’t be. That has to be better than an ambiguous spaghetti mess.
I think if you can understand and agree with the explicit marking of things as public or not, then you should hold no objection to the sealed by default proposal. Explicitness in cases of ambiguity is a theme of Swift. Rather than guessing or choosing a lazy default that accepts anything, it is stringent in its enforcement. Accommodating debugging or monkey patching — when it flies in the face of the overall language — makes no sense to me.
The last thing I’ll say is that doing ‘sealed by default’ with Swift 3 makes the most sense when you consider the project’s roadmap. Apple wants Swift 3 to be the last major source-breaking release. Deciding to be restrictive now, with sealed by default, and then relaxing later is not a source-breaking change. Apple can freely make things open again … if the restriction really is destructive. It’s not possible to go the other way, from open to sealed, in a source-compatible manner later on.
Even without weighing the pros and cons of the argument, logic dictates doing the more restrictive thing now, as the option to reverse it remains available.
1 Current Swift doesn’t have the full range of dynamic features that Objective-C has, not even close. I don’t think there is a philosophical aversion to adding that stuff in future Swift versions, however.
Will we see an Apple skinny bundle or live-TV streaming service?
Whether we’re providing it or somebody else is, it really doesn’t matter to us. What we’re trying to do is build the platform that allows anybody to get content to consumers. If a Time Warner [Cable] or a DirecTV wants to offer a bundle themselves, they should do it through Apple TV and iPad and iPhone. As a matter of fact, I’m not a big fan of the skinny bundle.
I think it’s a misconception. Most people, at the end of the day, end up paying more, not less, for the things they love. With TV content being at an all-time high, why are people asking for less? It has a lot to do with the way it’s being provided. If I feel like I’m not getting my money’s worth, then I want to pay less and I want less things. But if it were being provided in a rich platform with the capabilities I’m talking about, I don’t think people would feel that way. People pay for Netflix as an add-on to TV, and they’re happy doing it. And why is that? Because they’re happy with what they’re getting from Netflix. So the question to ask about skinny bundles is, why are customers not happy?
Honestly, this sounds like Cue giving up. It seems that Apple was chasing a master plan for television (there have been enough rumours and comments by TV execs to support that) and has now shelved those plans after facing resistance from many parties over contractual terms.
This is disappointing given my view of what Apple needs strategically. I see original content as a necessity in order to stay relevant. Clarkson’s post-Top Gear show is a great example of something that is now completely outside of Apple’s control and will always be an ecosystem disadvantage. Amazon has no incentive to share its exclusive content with other platforms; it can shut out Apple TV indefinitely. Apple needs a magical agreement with the likes of Netflix and Amazon, or it needs its own leverage with its own compelling shows.
People used to build Hackintoshes because they were a good bit cheaper than an equivalent Mac you could buy from Apple. Now though, since the Mac Pro hasn’t been updated in 3 years (and the components on the board were already about a year old) people are building Hackintoshes because you can build a Mac that is faster than the fastest computer Apple can sell you and, oh yeah, it costs $1,000+ less than even the base Mac Pro model. If you’re a professional photographer, programmer or video editor that currently owns a Mac Pro, Apple has forsaken you. Maybe it’s time to look into building a Hackintosh.
An inexpensive ~$1200 Hackintosh build is faster than any Mac hardware Apple currently ships, as well as being significantly cheaper. I’m not surprised that buying your own components and assembling the machine yourself is cheaper than what Apple sells pre-configured; there’s a huge price gap between pre-built and self-built Windows PCs too. Official Macs also come in custom-designed enclosures that will never be matched by a generic ATX tower, which goes some way towards justifying the price difference.
Another reason the Hackintosh solution compares so well against a Mac Pro on the price-performance scale is that Apple has left the Mac Pro to languish, neither updating its internals nor offering compensatory price drops. The 5K iMac is a great counterexample here: a powerful computer that represents very good value for money, having been refreshed recently.
Obviously, I wish Apple would strive to make everything cheaper, but the bigger strategic issue, in my view, is this power differential when Apple abandons products for years at a time, an increasingly common occurrence.
Apple’s business model is to sanction only a handful of products for its platforms, which is normally fine and attracts most of the total market. However, not updating these hardware lines on a regular basis is a disservice to the Mac platform. If Apple is no longer interested in these segments, the products should be discontinued, not left to linger like rotting fruit. Right now, the biggest offenders are the Mac Pro and Mac Mini.
I’m not demanding year-over-year major overhauls but a pipeline of spec bumps and component improvements in line with the industry should be a requirement of these product categories staying in the lineup. I don’t really care if Apple wants to charge even more money than they do already for this … but it should be possible to buy sanctioned top-spec internals for Macs.
For average techies, Rundle describes how making a Hackintosh is actually pretty easy as long as you stick to the online community guides for what to do and what to buy. It seems as painless as building any computer from parts (which is not very hard at all, it’s like a 3D jigsaw puzzle where you have the assembly instructions for the solution).
A standout annoyance is the lack of iMessage support, as it is seemingly tied to hardware serial numbers. My guess is this is somehow related to the underlying iMessage encryption processes. I would worry that this is a trend and that the incompatibility will spread to more Apple services as the company continues to enforce stricter security policies across the board.
I think the biggest drawback is the necessity to wait when new software updates come out for others to verify compatibility. This is related to the perpetual looming threat that, one day, Apple software could cut the Hackintosh industry off completely (whether on purpose or just by coincidence) and nothing will work ever again. This is fine if you can bear converting your hardware investment into a plain Windows PC as a final backstop.
With Apple’s current schedule, we’d be in line to get a major upgrade this fall to the iPhone 7, with last year’s iPhone 6s and 6s Plus being the in-between “S” year models. But on a three-year cycle major hardware upgrades will come every third year opposed to the every other year schedule Apple is currently on. Word has it that Apple could make that change starting with this year’s iPhone, meaning this fall’s iPhone could be a lesser upgrade and that a full-fledged iPhone 7 with notable hardware changes won’t arrive until 2017. But where does that leave Apple’s naming scheme for the device?
Everyone loves to debate product naming, including me. At least for iPhone, the branding has been easy to guess for many years thanks to the cyclical tick-tock cadence of major chassis design revamp followed by an ‘S’ series incremental component update.
This year, as all the rumours are indicating, the cycle is changing. This year’s flagship iPhone looks almost the same as an iPhone 6s, which is itself a derivative of the iPhone 6 from 2014. Add into the equation the fact that next year’s iPhone is rumoured to be a major ‘all glass’ design revamp, and it’s hard to say that the new 2016 iPhone will be called the iPhone 7.
It doesn’t feel like it lives up to the stature that the 7 nomenclature implies. It also puts the 2017 iPhone in a sticky situation for names: iPhone 7s doesn’t seem appropriate for a year when the device heads in a completely new direction design-wise. Apple could leapfrog and jump straight to 8, but that feels weird given the generational history.
Names like ‘Extreme’ or ‘Air’ don’t really mesh with my sensibilities either. ‘Air’ sounds like a design change (iPad Air was significantly thinner and lighter than its predecessor) and ‘Extreme’ sounds corny. ‘Pro’ is the best suffix that has been suggested: would it be iPhone 6 Pro or iPhone 6s Pro? That’s a lot of syllables even for the 4.7 inch model, the 5.5 inch size would be awkward to speak and write: ‘iPhone 6s Plus Pro’.
Right now, I’d still bet on iPhone 7. It is the most obvious choice, even if it doesn’t quite fit the bill. In my mind, next year, Apple would move away from numbers at the logical design breakpoint, skipping the ‘7s’ conundrum and moving to something like ‘iPhone Air’ as the flagship branding.
The new Music app in iOS 10 looks different, drastically different. It’s been revamped with new font styles, navigation, layouts and animations. In general, app design is a combination of visual aesthetics and behaviour; any evaluation of the changes Apple has made should consider both. This is all subjective (which is ultimately what makes it hard), but the general consensus on iOS 9 Music was one of confusion. The way it behaved was difficult to understand.
I think the new Music app represents a huge improvement in that area, greatly enhancing usability. Addressing my primary complaint, library navigation is now direct and obvious. There is a plain list of full-width buttons that directly open the primary views of your local music. If you want to see albums, tap Albums. If you want to start a playlist, tap Playlists. If you ever get lost in the menus, keep hitting the Back button until you get back to this list.
It’s a huge improvement over the iOS 9 drop-down selector thingy which was a non-standard UI widget that actively hid navigation controls behind an additional button press. The iOS 10 app even allows for some personal customisation: tapping the Edit button reveals drag handles and additional toggleable rows. For example, in the old design, all users had to always see (and skip over) the Composers filter … which very few people care about. In iOS 10, the button defaults to hidden and can be turned on if desired.
One of these list items is ‘Downloaded Music’ which shows only tracks and albums that have been saved to local storage. Apple is plainly responding to user feedback that people couldn’t work out what was stored in the cloud and what wasn’t. Downloaded Music answers this question unambiguously, even adding an additional explanatory banner on detail views if the filter is applied. Circling back to the customisability, if being able to cleanly distinguish what is available in local storage is unimportant or unnecessary, the list item can simply be unchecked and hidden.
The persistent bottom bar tabs have also been tweaked to truly represent the primary features of the app. ‘Connect’ has been ditched from the tabs (a failed social network does not justify such a prominent position in the interface) and ‘New’ has been renamed ‘Browse’. The ‘Library’ tab (née ‘My Music’) has also been moved to the primary (first) slot in the toolbar, where it always should have been.
Up Next has also been reworked and now appears inline on the Now Playing screen. Simply scroll down the view and the upcoming tracks are listed below; Up Next is no longer hidden behind yet another modal. It works better spatially: thinking of the view as a progressive timeline, the next songs are now ordered beneath the currently playing track. Shuffle and Repeat are also located nearby, although I think there needs to be a way to Shuffle All from the main screen.
That’s the behaviour; big wins across the board as far as I’m concerned. Aesthetics are a different kettle of fish. iOS 9 Music was pretty boring in terms of visuals, mostly reusing stock UI components and doing an average job in areas where it did rely on custom elements.
The new app definitely makes more of a statement, pushing iOS onto a new design trajectory with big, bold fonts. The Apple News and Home apps have also adopted this style. It’s not clear if Apple wants to move towards this look (defined by its heavy font weights and comically-large elements) for all of its apps; it’s still early days.
I have mixed feelings on the appearance. I like the proliferation of buttons with backgrounds (such as the pink circle around the ••• button on album list views) as well as the shift away from translucency for most things. The thick title fonts are a regression, especially as they are applied inconsistently; navigation bar titles continue to use the normal system fonts, for example. The font size of the main headings is just laughable; it feels like you have cranked up the Dynamic Type accessibility options.
The Search screen is another example of inconsistency. Whilst the text field has been ballooned to a larger-than-normal size, the segmented control and ‘Cancel’ button are as small as ever, with similarly small fonts. It does not match. The humongous components also seem like an inefficient use of screen space, which is funny given 2016 was the year Apple reintroduced a 4-inch iPhone to its lineup.
Visually, I love the new Artist and Album list views. Putting two big photos of album art per row looks great and cleanly splits the screen in half on my 4.7 inch iPhone 6. I wish the section titles used a larger typeface as they get lost amongst the cells.
The new look for the Now Playing screen is decent. I like how the album art pops up from the page (shadows!) to signify the song is playing and recedes back into the frame when paused. This is capped by a subtle bounce effect. I also like how the scrubber thumb increases in size when in use and nudges the time labels out of the way when the user drags it to either end of the line. I don’t like the spacing between the song name and the scrolling secondary information row of text; the padding is too tight.
Another negative is the interaction to show and hide the modal is not 1:1 — it doesn’t follow the finger. As soon as a swipe gesture is detected, it fixes into the final state. The playfulness of being able to cancel the gesture mid-flight1 is lost; the previous incarnation of the app did this properly.
The ‘additional options’ sheet (activated by tapping a ••• button or long-pressing on album and song items) has also been redesigned to feature rounded-corner sub-sections. This view was poor in iOS 9 too and it hasn’t gotten better. In fact, I’d argue the applied border radius has made it worse. Your eyes are confused by the sudden appearance of four arbitrary blocks of content with irregular interactions. Moreover, buttons to ‘Love’ and ‘Dislike’ are presented side-by-side with near-identical iconography. Not to mention, every element in this view is tinted bright pink. It’s ugly. It’s even worse on smaller iOS devices where the middle list section will scroll if space is constrained. The scrollbar indicators in this state do not respect the corner radius and are naively clipped (hideous).
Neither design was perfect but if I had to choose, I’d pick iOS 10’s attempt over iOS 9. The usability is the main reason for this: the simple fact Music is getting a ground-up redesign after just one year is enough evidence to prove that Apple messed up badly the first time. As I hopefully expressed above, it’s still a mixed bag as far as aesthetics are concerned for the iOS 10 Music app. I think the Heavy Fonts look would work better if Apple had gone all the way and brought it to every app in a wide-reaching system overhaul. That is not the case, so it (sadly) sticks out in the crowd.
1 This is an overlap of behaviour and visuals. You could argue a non-cancellable animation is worse for both axes of user experience.
Leading up to the event, the general opinion regarding watchOS was a wish for Apple to rethink the structure of the mental model. I think it was clear to everyone that complications, glances and apps were too much. Three related-but-separate views into the same application were overkill, exacerbated further by the Watch’s sluggishness, which made switching in and out of the different states frustrating.
The community consensus was asking Apple to ditch apps and focus on status update interactions, notifications and glances for quick actions. What Apple did was cut out glances and make complications a primary entry point into apps. Apps that are represented by complications are prioritised by the system and kept in memory as much as possible, enabling them to be launched instantly.
Apple has also stated that it is redesigning apps so their opening screens display usable summaries of information and place primary actions upfront. In combination with the Dock — the new favourite-apps view that appears any time you press the side button — watchOS 3 retains much of the utility of Glances (quick information and quick actions) even though they no longer exist.
Activity was my most popular Glance by far on watchOS 2. With watchOS 3, I’ve put the Activity app into my Dock. As the screenshots in the Dock regularly refresh with the latest content, I simply press the side button to ‘glance’ at my rings. Tapping on the preview jumps me into the app immediately thanks to instant launch. They’ve managed to successfully remove Glances entirely, reducing complexity, while retaining most of the utility they offered. (Heart Rate is now present in the system as a standalone application.)
The changes also help the OS feel more familiar to iOS users, as the Dock is similar to the iPhone multitasking interface. Just like complications, putting an app in the Dock tells the system to save it in RAM, enabling instant launch most of the time. I only use four or five apps on the Watch regularly so I’ve put them in my Dock. With watchOS 3, my most frequently used apps are readily available and also launch in under a second. It’s great. Apps that haven’t been frozen in memory still launch as slowly as ever, obviously.
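The entry-point model described above can be sketched roughly as follows. This is an illustrative guess at the policy, not the real watchOS scheduler; the function names and the notion of a simple resident set are my own simplification.

```python
def is_kept_resident(app, complication_apps, dock_apps):
    """Hypothetical model of watchOS 3's policy: apps backing a current
    complication or placed in the Dock are prioritised and kept in memory."""
    return app in set(complication_apps) | set(dock_apps)

def launch(app, complication_apps, dock_apps):
    # Resident apps resume instantly; everything else cold-launches.
    if is_kept_resident(app, complication_apps, dock_apps):
        return "instant resume"
    return "cold launch"
```

Under this model, putting Activity in the Dock is what makes the side-button ‘glance’ and the subsequent tap-to-open feel instantaneous.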
Again mirroring iPhone, swiping up from the bottom of the clock face reveals a new Control Center panel. It’s cool that they are carrying over the metaphor but the current design of watchOS Control Center is mediocre: it’s just a mess of buttons. I would like to see that cleaned up in future betas.
The Dock replaces the Friends circle as the action that happens when you single press the side button on the watch hardware. In fact, Friends has been removed entirely from watchOS. Messaging your favourite contacts is now handled, logically, by the Messages app. You can still double-click the side button to activate Apple Pay as before.
watchOS 3 also introduces a few new watch faces and I love them. I’m addicted to ‘Numerals’ and ‘Activity Digital’. Thanks to a new edge-to-edge swipe gesture, it’s also really easy to swap between them. I change to the Activity face when I’m consciously thinking about closing my rings for the day. When I’ve hit my daily activity goals, I simply swipe back to the minimalist Numerals face as the fitness information is no longer important to me. It’s so cool how the number moves with the hour hand around the day.
Aesthetically, I’m not a huge fan of Dark Mode. I think it restricts the colour palette for other elements (such as tab bar tint colour) leading to repetitive apps that have no distinctive personality: everyone trends towards dark backgrounds with blue and orange accents. This is especially true if Dark Mode means a theme that is meant to be easy on the eyes at night, not just an appearance style that is predominantly dark. Windows Phone attempts to combat the boringness of black with rich animation and fancy transitions, to some success. Even then, Microsoft is pivoting away from the darkness with recent software revisions, adding more vibrance and bright elements.
There’s also no getting away from the fact that a lot of apps are comprised mainly of full-colour photography feeds, like Facebook and Instagram. Full-colour images look terrible with dark chrome in scrolling lists; by their nature of being photo-realistic, they can’t match the surrounding UI. Dark Mode is crippling for these uses and it just so happens these uses are very common tasks for phones. What I’m saying is, for a lot of apps that are used by actual people, dark interfaces are not a good thing.
Dark Mode also ‘doubles’ the workload on developers and designers. It causes apps to split their resources between light and dark appearances ultimately compromising the beauty of both. I think many apps still look bad with just one colour scheme to consider, following the transition away from skeuomorphism. I believe there’s a lot more work to be done with what we have today before thinking about supporting another branch of the design language. I would be more in favour of a dark iOS if it was the new base UI, replacing the iOS 7 white aesthetic completely.
Ignoring personal preferences and in spite of those issues, I do think Dark Mode has a good chance of happening in the iOS 10 cycle. For one thing, a lot of people want it. I asked on Twitter about iOS 10 feature requests and many people asked for Dark Mode. I’m not sure if people want it because it looks cool or because it helps reduce eye strain at night. If it’s the latter, Apple has already started addressing that issue with Night Shift and I can see them pushing that further with a fully-fledged night UI toggle.
Another factor in Dark Mode’s favour is the looming rumours for the 2017 iPhone, which will reportedly include an OLED display for the first time. In general, OLED devices prefer dark user interfaces as the screens are incredibly power-efficient when showing black pixels. OLED contrast levels are also very good, so dark themes simply look nicer. The Apple Watch UI is black for this reason; the black backgrounds are so dark they blend in with the bezel.
Bringing Dark Mode into the ecosystem ahead of the OLED iPhone release allows third-party apps to start the transition sooner, so dark themes are ready by the time the hardware ships. That being said, I find it difficult to believe that Dark Mode will be present in the iOS 10.0 builds announced next week. The feature is more likely to come with a later iOS 10.x update. I reckon we’ll have another significant iOS feature update mid-cycle, just like iOS 9.3 this year.
After starting a streak on April 1st, closing those Activity rings each day has been the biggest motivation to keep up my new routine. I missed two days near the end of April when I was under the weather, but picked back up and haven’t missed a beat in 42 days.
Closing the Activity rings is rewarding, but actually seeing results has also been a big reason to keep going. Apple Watch revealed one thing for certain during year one: my lifestyle is generally pretty sedentary throughout the week. If I don’t make an effort, I can easily go a whole day and only burn 150 active calories.
Zac has succeeded where I have failed, beating his Move goal for every day in the month just gone. Over a year since Apple Watch’s launch, I would have hoped that I could have managed the same thing at some point … alas laziness. What’s interesting though is that his achievement has actually motivated me to do it too. I was planning to go for the month-long award soon anyway but now I want it doubly so; there’s an implicit social pressure.
Apple’s software could do better to assist here: there is no social element to the Health app or Activity app. It would be cool if they built on the Activity app’s medals system and introduced things like shared leaderboards and achievements. Gamification can be annoying and cheesy, but Apple has the design sophistication to execute it well. Even something small like a dashboard of live Activity rings from family and friends, and the number of steps taken by them so far today, would be incentivising. Nothing too in your face, no push notifications bragging about your social circle’s achievements, no pressure. Just a list that you can look at and see how you compare, if you choose.
According to a new note from reliable Apple analyst Ming-Chi Kuo at KGI Securities and corroborated by our sources, Apple plans to introduce a dramatically overhauled MacBook Pro later this year. Kuo says the device will have a new “thinner and lighter” design with design cues taken from the 12-inch MacBook, as well as Touch ID support and a new OLED display touch bar above the keyboard.
The OLED display touch bar will replace the physical function keys along the top of the keyboard, while the design will adopt new metal injection molded hinges as reported earlier this year.
It’s certainly a novel direction to take the MacBook line, adding dynamism to a keyboard layout that has remained the same for many years. A lot of Windows laptops include a row of illuminated capacitive buttons, but Apple is going further. It’s essentially replacing the function keys with a (really skinny) OLED touchscreen that can display any arbitrary UI. I think Apple chose OLED for the contrast levels; I can envision how the deep blacks of the screen would look great alongside the piano-black keyboard keys.
It’s not obvious to me how Apple is going to use this secondary display. Because it isn’t the primary display, and because it can’t be a mandatory requirement to use OS X — Apple will still be selling millions of Macs without an OLED accessory bar — I fear it might be an underused gimmick.
As Nintendo fans will know with the Wii U, making interfaces that interact between multiple screens is tough. What happens is that both displays battle for the user’s attention simultaneously but it turns out that ultimately one screen naturally monopolises the focus. In the case of MacBook, the primary canvas is the 15 inch Retina display. Demanding the laptop user to look down constantly is laborious and annoying. The natural laziness of people means most do not want to be nodding dogs; there’s a reason why touch-typing is so popular. Aside from physical strain, juggling multiple displays is simply a lot of information to take in. Creating UI conventions to signal when users need to check their dashboard display is incredibly hard. Putting critical information on the secondary display is a risk if the user simply forgets to check it.
The other end of the spectrum, then, is to keep the OLED screen content pretty much static. Limiting dynamism simplifies the mental load and enforces clear patterns of expectation about when the user is supposed to interact with the accessory display. Perhaps preferences allow for some customisation of what can appear there — the crucial point is that the buttons wouldn’t change passively whilst using OS X.
Although that would remove the problems I enumerated, it is a functionality tradeoff. What I’m describing in the second case is not that far removed from what exists already, i.e. a fixed set of function keys. In fact, it would be a regression in this case: the tactility of actual physical buttons would have been sacrificed. This is why I’m in a quandary. I would be concerned if Apple incorporated a significant new hardware change without a compelling use case to justify its existence.
A lot of people could argue that Force Touch was exactly that: a Mac hardware feature that was/is a dud. This rumoured change has more potential to be destructive, though. Force Touch on OS X can simply be ignored with no downside. An OLED button bar that replaces the function keys cannot be ignored; it will have to be used by every new MacBook Pro owner. If it’s bad or mediocre, every customer will be impacted.
Google had its I/O conference this week, hosting its presentation of its latest announcements and outlook on what can best be described as a pop-star concert stage. I think the venue was a mistake, but the presentation itself was markedly better than previous years’. Clocking in at two hours, the Google I/O keynote is finally down to an acceptable length. Just a couple of years ago, they would run two three-hour presentations on consecutive days.
One thing they unveiled was a FaceTime competitor called Duo. Specifically, there’s an element which struck a chord with me. When videocalling someone else, the recipient sees a live stream of the caller’s video as it rings. One side of the video call has already begun at the moment of the phone ringing. The other person can then pick up the call to start the two-way video, seamlessly transitioning into the conversation as the video of the person on the other end is already live.
It’s a fantastic streamlining of FaceTime. They also emphasised the instantaneous nature of the protocol, allowing the two participants to communicate immediately after the call is confirmed. FaceTime’s usage model is a lot colder. One person asks to call someone else; the recipient sees the person’s name and a static image. When the call is answered, the video streams attempt to initiate a connection — which involves staring at a Connecting indicator for a few seconds — before finally succeeding and allowing the two people to see and talk to each other.
The current FaceTime flow is as bad as a traditional phone call, which is basically what FaceTime is (in the same way iMessage is a 1:1 reproduction of SMS transmitted over the Internet). With Duo / Knock Knock, the call has effectively already begun as soon as the phone screen lights up on the receiving end.
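The difference between the two flows can be sketched as simple event sequences. This is an illustrative comparison based on the keynote demo and my reading of FaceTime’s behaviour, not any real API:

```python
def facetime_flow():
    # Traditional model: no media flows until the callee answers.
    return [
        "caller taps Call",
        "callee sees a name and a static photo while the phone rings",
        "callee answers",
        "both ends negotiate ('Connecting…' indicator for a few seconds)",
        "two-way video begins",
    ]

def knock_knock_flow():
    # Duo model: the caller's video is live from the first ring.
    return [
        "caller taps Call and their video stream starts immediately",
        "callee sees live video of the caller while the phone rings",
        "callee answers",
        "the one-way stream upgrades seamlessly into two-way video",
    ]
```

The key structural difference is that Duo removes the post-answer negotiation step entirely: by the time the callee accepts, one half of the connection is already established.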
Google showed how the caller could signify intent during the time waiting for the other person to respond. The user on the receiving end can pick up context from the Knock Knock video stream, such as where the person is, what they are doing or who they are with. Google showed the potential with examples of people holding up movie tickets, engagement rings or simple facial expressions of happiness or sadness. (That being said, the product video — embedded above — did not do a good job of expressing the possibilities tastefully; it was too cheesy and felt forced.)
Aside from the speed and practical advantages, it’s also just damn cool to send your face to someone else. Even if the feature turns out to be a gimmick, it encourages more people to do video calls in general, if only for the novelty of how it works. I think it gives a meaningful benefit to picking the video option over audio, though. Even if they decline, you can convey something in those couple of seconds that would never have happened otherwise. It’s almost like a transient Snapchat selfie with the opportunity to commit to a full conversation.
It’s a user experience thing that I hope Apple adopts. There are obvious knee-jerk fears of the dangers of letting people put live video onto someone else’s screen without explicit consent. I think these issues are easily mitigated by decent policy design, such as a (default) preference to only ‘enable Knock Knock for people in my contacts’. Careful attention will have to be given to the interface for callers too, especially early on, to explain what is happening — make it plain that the other person can see what you are doing right now even though you can’t see them yet. These are solvable social and technological problems and the benefits are huge, in my view.
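A contacts-only default could be as simple as a gate like the following. This is a hypothetical policy check; the preference names are invented for illustration and are not from Duo’s actual settings:

```python
def should_show_knock_knock(caller, contacts, prefs):
    """Decide whether the callee's screen shows the caller's live video
    before the call is answered (hypothetical policy sketch)."""
    if not prefs.get("knock_knock_enabled", True):
        return False  # feature switched off entirely
    if prefs.get("contacts_only", True):
        return caller in contacts  # the suggested safe default
    return True  # user has opted into previews from anyone
```

With the defaults above, a stranger’s video never appears unprompted, which addresses most of the knee-jerk consent concerns while preserving the feature for the people you actually talk to.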
Slight confession: I meant to write this post the same day as the event. I ended up being lazy and didn’t get to it until today. I’m glad I waited though, as it let me focus on what I was actually interested in. Almost subconsciously, my mind has concentrated on a couple of specific things.
Out of Google’s entire keynote, I can easily recall just two announcements: the Instant Apps demos and Knock Knock. Everything else is a vague blur or forgotten. Instant Apps is a technical quagmire with a lot of questions about implementation and utility, so I’m holding off on judgement until it’s more set in stone … although the premise is intriguing. Duo is more concrete, complete with product videos, and made me genuinely excited. Alas, neither of these announcements has a solid release date. I can’t wait to check out Duo and Knock Knock sometime “later”.