iOS 12 Rumoured To Prioritise Bug Fixes Over New Features

1 February 2018

Axios:

Apple has shaken up its iOS software plans for 2018, delaying some features to next year in an effort to put more focus on addressing performance and quality issues, Axios has learned.

Software head Craig Federighi announced the revised plan to employees at a meeting earlier this month, shortly before he and some top lieutenants headed to a company offsite.

Pushed into 2019 are a number of features including a refresh of the home screen and in-car user interfaces, improvements to core apps like mail and updates to the picture-taking, photo editing and sharing experiences.

On Twitter, I joked that people are probably happy to hear that Apple is focusing on bug fixes and performance enhancements over raw features, until they hear about what has been shelved to make it happen. A new home screen is basically what everyone who says ‘iOS is boring’ wants. I’d like to see it get some attention, not for novelty but because the current rigid grid is of a different era. The fixed row and column layout worked great on 3.5-inch and 4-inch screens but now that device displays are at least 5 inches tall, the layout would benefit from a rethink. The base behaviour of icons filling from the top left goes against the design guidance that important interface elements should sit within reach of a user’s fingers.

If you polled iPhone customers on whether they want a new Springboard or a raft of performance and reliability improvements, there’s no question the result would be in favour of the former. ‘New and shiny’ is addictive and tough to turn down. This is a classic case of the customer not always being right. The current home screen concept is not broken or offensive. For the most part, it still serves its role as a simple, easy-to-use switchboard for opening apps.

A perception that iPhones are buggy, slow and unreliable is something that hurts the Apple brand if it persists, affecting all of Apple’s products. This notion, whether it is real or not, has to be addressed. A lot of this falls to marketing, whose job is to explain and communicate the state of play; the iPhone battery throttling saga is an example of where PR messed up.

Some responsibility has to be borne by the engineering group. iOS and macOS have a variety of issues that impact various subsets of the user base. Headlines about poor Apple software quality are reinforced in a ‘death by a thousand cuts’ manner. Everybody has their own little problems that keep the story burning. Is it worse than any other year? Only Apple has the answer to that question, but it doesn’t really matter.

An undercurrent of dissatisfaction has built up to a point where it demands proactive attention, amplified by an unfortunate string of software problems seen in the last quarter of 2017. The best way to tackle a perception is to make changes to the products. Fix bugs. Work on small things that have previously been too far down the list to get around to in the face of a release schedule pushing for X new features. Make slow things faster. Make things faster that nobody complained about being slow before.

Collectively, people have short memories. A few months of seeming stability will appease the angry people and quell the negative narratives, with changes that are small in proportion to the current outcry. Dedicating time to performance and reliability is a thankless job but it will do the trick.

Long term, I don’t know how Apple prevents this same cycle of happiness, discontent and anger from happening again without deeper structural changes in organisation and management. Like, how is software quality affected when iOS 13 reverts to a release cycle where employees are stretched to deliver new features?

HomePod Launching February 9 Without Multi-Room Features

25 January 2018

Apple:

HomePod, the innovative wireless speaker from Apple, arrives in stores beginning Friday, February 9 and is available to order online this Friday, January 26 in the US, UK and Australia. HomePod will arrive in France and Germany this spring.

The HomePod was presented at WWDC as a Sonos competitor first, Echo competitor second.

Everyone has different accounts of how well Siri works for them, but I think it’s fair to say that it is at least up to par on the fundamental things that people use the current generation of smart assistants for (weather, timers, music control). I also think Siri beats out Google and Amazon in terms of interacting with smart home accessories using natural language, dependent on the user having HomeKit-compatible equipment. No AI assistant is close to good enough yet; AI is a burgeoning field and there is a long roadmap for Siri, Alexa and the innominate Google Assistant to grow. I am hopeful that Apple can be at the leading edge of the space, despite Apple letting Siri languish as a largely unchanged feature in the years since its original 2011 debut.

Pre-empting the reviews, the biggest barrier to HomePod competing with the other smart cylinders is simply the prohibitive price. If you just want a faceless assistant in your living room, the HomePod isn’t really a good option because you are paying for features (speaker quality) that you probably don’t care about.

To justify the higher price, the music side of the HomePod story has to be very good, close to excellent even. Based on whispers, I think it’s going to impress normal consumers and audiophiles with industry-leading sound in its form factor class. Whether the appeal of premium audio is too niche is another matter. I also prefer how the HomePod looks as an object compared to Sonos’ latest speakers, the Echo cylinders and Google’s Home Max. The best sound in the prettiest package. That’s a pretty good sales pitch.

However, there’s more to competing with Sonos than having the best standalone compact speaker. A big draw of Sonos is that you can stream music through your whole home with multiple speakers in different rooms all synced up to the same audio stream. The HomePod was intended to have exactly these capabilities: multi-room playback and the ability to use two HomePods as a stereo pair. Neither of these features is shipping at launch. This undermines the product substantially, as you can’t reasonably compete in the modern home audio market without multi-room synced music; I’d bet the average Sonos owner has more than one speaker, for instance. Apple knows that these are critical elements of the product’s appeal; you don’t have to scroll that far down the HomePod page to see them advertised, albeit with ‘coming later this year’ banners.

These drawbacks dilute the original concept that Apple laid out at WWDC, and the 1.0 will not fulfil the vision of a true smart Sonos replacement. I’m sure there’s an interesting behind-the-scenes story on why AirPlay 2 has caused them so many internal setbacks. It’s embarrassing to announce a product, then delay it, then release it with a stripped-down set of features compared to what they originally sold people on.

My guess is that when Apple made the decision to delay HomePod into early 2018, they thought that the multi-room AirPlay 2 stack would be ready to go with just a few more weeks of work. It has since transpired that it is actually going to take months to finish it up, and a product manager made the call to ship the HomePod as is, without these features. Note that if you are only buying one unit, you aren’t actually affected by this. Ideally, multi-room will be ready for the holiday season and Apple can encourage everyone that owns one to take advantage of it.

All that being said, I am glad Apple is making this product and I am optimistic it will be a beloved item for people that stretch their wallets to buy it. Whether it will be a commercial business success, with millions of units sold, is a different matter entirely. HomePod straddles the line between a standalone product and ecosystem accessory.

Apple Will Let Users Disable Throttling On iPhones With Degraded Batteries

18 January 2018

9to5Mac:

Apple had already said that a future iOS update will give users more insight into the state of their battery. In an interview with ABC News, Tim Cook was asked for his take on Apple slowing down iPhones with degraded batteries. He revealed that the developer beta including these features will be released next month, with a public release to follow after.

Moreover, he says that this forthcoming update will give users the option to disable the throttling to maintain normal CPU performance but be at risk of unexpected shutdowns.

In the long run, I hope Apple can ship iPhones with batteries that are able to deliver peak energy loads for many years. Battery tech moves slowly, and who knows how long this will take to come to market, but that’s the obvious ultimate solution even if it’s a far-off goal.

Apple’s short-term response of apologising for poor communication, promising to add better battery health statistics to iOS and discounting battery replacements felt like a solid counter to the criticism and seemingly assuaged most of the people who were upset — including myself.

I struggle to see the motivation for Apple to go further and make the behaviour optional. The existence of this setting, which will be available in an iOS developer beta released next month, is a contradiction of what Apple said in the public apology letter. The letter intelligently argues that the throttling was put in place to improve the user experience. With that context taken as truth, this revelation from Cook is essentially an announcement of a feature that users can enable to make their experience worse.

Apple has made a name for itself as the company that makes hard decisions and believes in them. That philosophy is arguably the reason it has been so successful as a brand. Historically, Apple has made controversial design choices and backed them with conviction in the face of public outcry. Headphone jacks, optical drives, Adobe Flash. It bore the brunt of the criticism because the company believed it would ultimately be proved right. (And it was.)

In this instance, making the throttling behaviour optional feels like the easy way out, not the best way. It will certainly stop the lawsuits dead in their tracks and silence the vocal minority, but is it the best move for the iPhone as a product? I’m not convinced. Every new setting comes at a cost. Apple is putting the burden on customers to make a choice that I don’t think people should have to worry about.

Sharing Links From iOS Twitter Appends Garbage To The URL

10 January 2018

There’s been a change to the official Twitter app in the last few months that affects anyone who tries to share a URL from inside the app. Using the standard activity view controller, better known as the system share sheet, the Twitter app surreptitiously appends some query string parameters to the original URL.

The query parameters are effectively garbage to end users; they have no utility to anyone but Twitter itself. Honestly, there’s not much Twitter can do with them either, apart from some very coarse tracking of user behaviour.

If the user commits to sharing the URL without amending the link, Twitter can see that its iOS app was the origin of the engagement if that URL is posted publicly.

Query parameters are just plain text. They can’t snoop on iMessage conversations or your private email. If someone re-shares the link to a public website or social network post, then theoretically Twitter knows that at some point that URL was made from its iOS app.

Here’s an example of a URL that Twitter mangled when I shared my profile from the Twitter app to the Messages app.

http://twitter.com/bzamayo?ref_src=twcamp%5Eshare%7Ctwsrc%5Eios%7Ctwgr%5Ecom.apple.UIKit.activity.Message

Focusing only on the query string (the bit after the question mark), the weird percent signs look scary, but that’s just an artefact of forming a valid URL. The entities can easily be decoded into a relatively human-readable result:

ref_src=twcamp^share|twsrc^ios|twgr^com.apple.UIKit.activity.Message

It’s not hard to see here what information Twitter is trying to attach to the original URL; the action came from a share sheet on iOS, and it was shared to the Messages app extension. The bit after the ‘twgr^’ is the activity type of the particular share action selected.

Whilst there isn’t a directory of activity types to look up, it’s not hard to track them down as they start with the app bundle identifier. Apple provides a bunch of built-in ones with the com.apple.UIKit.activity prefix and third-party apps tend to use obvious names. If you share to Bear, the string will literally contain the words ‘Bear-iPhone-Sharing-Extension’. One of the more obtuse ones I’ve seen is com.tinyspeck.chatlyio.share … but a quick Google search reveals that it represents the Slack sharing extension.

The fact that the last component changes dynamically based on what action the user selects feels invasive if you don’t know what’s going on at the API level. Users are told that the activity share sheet is managed by Apple, so instinctively it feels like being able to grab the activity type is nefarious.

In reality, this is very easily achieved. As part of the activity provider API, the system asks for content to share for each sharing extension the user has installed. The Apple framework openly passes the activity type to the app. Twitter simply takes the base URL it wants to share and appends the ‘garbage’ before returning.

If you are interested in the technical implementation, here’s a working code snippet. Even if you are not a programmer, the brevity highlights that there isn’t anything fancy going on here.
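In sketch form, using the UIActivityItemSource protocol that UIKit provides (the class and parameter names here are illustrative, not Twitter’s actual implementation):

```swift
import UIKit

// Sketch: an activity item source that appends tracking parameters to a
// URL based on the sharing destination the user picks in the share sheet.
final class TrackedLinkSource: NSObject, UIActivityItemSource {
    let baseURL: URL

    init(baseURL: URL) {
        self.baseURL = baseURL
    }

    // Placeholder returned while the share sheet is being prepared.
    func activityViewControllerPlaceholderItem(_ activityViewController: UIActivityViewController) -> Any {
        return baseURL
    }

    // Called when the user selects a destination; UIKit passes in the
    // chosen activity type, which can then be embedded in the URL.
    func activityViewController(_ activityViewController: UIActivityViewController,
                                itemForActivityType activityType: UIActivity.ActivityType?) -> Any? {
        guard var components = URLComponents(url: baseURL, resolvingAgainstBaseURL: false) else {
            return baseURL
        }
        let destination = activityType?.rawValue ?? "unknown"
        components.queryItems = (components.queryItems ?? []) + [
            URLQueryItem(name: "ref_src", value: "twcamp^share|twsrc^ios|twgr^\(destination)")
        ]
        return components.url ?? baseURL
    }
}

// Usage: hand the item source to the share sheet instead of a plain URL.
// let source = TrackedLinkSource(baseURL: URL(string: "https://twitter.com/bzamayo")!)
// let sheet = UIActivityViewController(activityItems: [source], applicationActivities: nil)
```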

The important thing to note here is that the mechanism is innocuous and uses valid APIs provided by Apple. Twitter is not exploiting private APIs to achieve this. A cursory look at the app review guidelines suggests to me there are no grounds for Apple to scold Twitter (or any other app) for doing it.

My personal stance is that this is annoying but does not violate user privacy. Importantly, Twitter cannot append arbitrary information to its URLs system-wide; it is confined to cases where users share something from inside the Twitter app itself. I don’t really see a justification for Apple to amend the guidelines to disallow it. I just take it as another reason not to use the official Twitter app.

Apple Investors Push For Better iOS Parental Controls

8 January 2018

Think Differently About Kids:

Based on the best available research, enhancing mobile device software so that parents (if they wish) can implement changes so that their child or teenager is not being handed the same phone as a 40-year old, just as most products are made safer for younger users. For example, the initial setup menu could be expanded so that, just as users choose a language and time zone, parents can enter the age of the user and be given age-appropriate setup options based on the best available research including limiting screen time, restricting use to certain hours, reducing the available number of social media sites, setting up parental monitoring, and many other options.

There’s definitely a debate about how much parents should oversee their child’s usage of technology and in what form: whether guidance should be through advisory discussions, active enforcement with software restrictions, or a combination of both. I do not want to speak for the validity of the research cited in the linked open letter.

On Android, it is possible to download a parental controls app. You have to sign your privacy away to a third-party service that you can’t really trust but it is possible. The locked-down sandboxed security model of iOS means an aftermarket app cannot get low-level enough to override app launches and stuff like that.

This means the onus is on Apple to provide functionality like allowing access to apps during specific time windows or keeping an activity log of open apps so parents can see what their child has been up to.

I think Apple should offer these features and leave it up to the discretion of individual families as to how they are used, if at all. The fact that iOS has a Restrictions feature already says to me that Apple does not hold a principled stance against parental controls; it’s just that the current offering in iOS is lacking.

I would expect that any parental guidance features introduced in a future iOS would make it abundantly clear when they are used. Apple would not let parents secretly spy on their kids. Apple already does explicit signposting for phones that are managed by an enterprise deployment system, which gives administrators the potential to track the device’s location and supervise usage. Enforced Parental Controls would get similar labels.

Apple's Statement On iPhone Battery Performance Throttling

2 January 2018

Apple:

Apple is reducing the price of an out-of-warranty iPhone battery replacement by $50 — from $79 to $29 — for anyone with an iPhone 6 or later whose battery needs to be replaced, available worldwide through December 2018. Details will be provided soon on apple.com.

Early in 2018, we will issue an iOS software update with new features that give users more visibility into the health of their iPhone’s battery, so they can see for themselves if its condition is affecting performance.

The initial press comment felt rushed and incomplete, but the public statement that has been posted on Apple.com is a pretty good response to the furore. Promising the discount only through to the end of 2018 is weak, though.

If Apple wants to consider iPhone batteries as consumable, I don’t want them to profit off battery repairs. $29 is a palatable service cost to bear after two years of iPhone ownership; $79 stings. If their aim is to maximise the longevity of their devices, they should not have conflicting incentives from making money on repairs down the road. I do not want Apple to run a razor-and-blades business model, even inadvertently.

I’m interested to see exactly what battery statistics Apple will surface in the software update due ‘early in 2018’. When this update ships, I expect another wave of complaints as everyone will be able to see for themselves how degraded their own iPhone battery is. Regardless of the public reaction, transparency is critically important; a lack of it is what caused the fiasco to flare up so badly in the first place.

I would also like to see Apple release estimated numbers on how long customers should expect to be able to use their iPhone at full performance. This support document gives a rough idea about what effects the throttling will have on the user experience but I haven’t seen Apple say when customers should expect their iPhone experience to become less optimal.

Dirty Coding Tricks In Games

18 December 2017

Gamasutra:

The game launched fine off the 512mb card, but we were getting periodic, inconsistent system lock-ups when attempting to launch off of the 256mb card. We wracked our brains for a fix, but ultimately decided that our coding efforts would be best spent making the game as good as possible instead of chasing down some ghost in the machine.

So we shoved a 20mb music file into the game data, pushing the total file size beyond 260mb. This totally precluded us from having to involve the 256mb memory card in the submission process. It was a good game that we shipped on time. Microsoft and our customers were none the wiser.

This article is packed with examples of the unimaginable hurdles faced when shipping real software. Whilst the general motivations of these anecdotes are tightly coupled to a bygone era, when software had to be burned onto physical media with immutable finality, the underlying problems of unexpected roadblocks, edge-case gotchas, and deadlines are as prevalent as ever in the industry.

Games, apps, websites. It’s all just software that is becoming ever more complicated and sophisticated. The normal human tendency is to treat an app as a finished, complete thing. Behind the scenes, there’s a lot of glue holding together the walls of a constantly-changing structure, with developers performing incredible feats of engineering gymnastics to make it all ‘work’.

Jony Ive Returns To Design Team Management

9 December 2017

Bloomberg:

Ive, 50, was named Apple’s chief design officer in 2015 and subsequently handed off some day-to-day management responsibility while the iPhone maker was building its new Apple Park headquarters in Cupertino, California. “With the completion of Apple Park, Apple’s design leaders and teams are again reporting directly to Jony Ive, who remains focused purely on design,” Amy Bessette, a company spokeswoman, said Friday in a statement.

It’s hard to parse what this means because nobody on the outside really has a good idea of what the title change two years ago meant. Jony Ive’s elevation to Chief Design Officer felt like the first step towards his retirement, with Howarth and Dye taking up the posts of hardware and software design leads.

Yet Apple never tipped its hand that Ive was on the way out. I expected Howarth and Dye to slowly start appearing in keynote presentation videos, in interviews, and in new product marketing. Ive would slowly fade from relevance in Apple’s public relations before he left for real. That simply didn’t happen. If anything, Ive became even more intertwined with Apple’s public image. He has done countless interviews and photo shoots in the intervening years.

Now, the managerial changes have essentially been reversed to what they were pre-2015. Howarth and Dye have been quietly removed from the Apple Leadership page and Apple told Bloomberg in a statement that they report directly to Ive once more. Was this the plan all along, or was Ive originally planning to retire after Apple Park was done? With little evidence to go on, I tend to lean towards the latter explanation; otherwise, they wouldn’t have bothered announcing a role switch-up in the first place.

Apple HomePod Prototypes

25 November 2017

Bloomberg:

Once Apple decided to use beam forming, designers experimented with various shapes. One prototype looked like a flat panel with a mesh screen on the front. Another was about five times as tall as today’s 7-inch HomePod and packed in dozens of speakers. At one point Apple considered selling the device under the Beats brand but the idea was abandoned. There was discussion of adding a second woofer and including mid-range speakers to boost the sound quality even further. Designers also mulled producing the speaker in several colors but eventually decided on black and white. Over the years a closet filled up with prototypes, a kind of mini museum dedicated to the HomePod.

I always think back to the Apple-Samsung patent trials, which included images of early iPhone prototypes as submitted evidence. There are some truly wacky designs in there. Nobody would believe that an angular phone would have even been considered by the likes of Jony Ive and Steve Jobs without these court disclosures.

Apple surely experimented with a much larger smart speaker chassis, probably to identify the best balance of audio quality and physical elegance (size). I would be shocked if the HomePod line gets larger before it gets smaller. The average person will already struggle to differentiate the superior sound signature of the HomePod compared to rival products. Going larger would merely target a segment even more niche than the market it already appeals to.

HomePod Release Delayed

18 November 2017

9to5Mac:

Apple shared this statement with 9to5Mac confirming the delay:

“We can’t wait for people to experience HomePod, Apple’s breakthrough wireless speaker for the home, but we need a little more time before it’s ready for our customers. We’ll start shipping in the US, UK, and Australia in early 2018.”

When Apple announced the iMac Pro at WWDC in June, it made sense to do so despite the product not shipping until year’s end. Apple satiated the Mac Pro user base with a pledge to update it and the unveiling of the iMac Pro, which appeals to much of the same crowd. Even if you don’t want to buy one, it’s an impressive offering that proves Apple is still catering to professionals and should give some confidence that Apple will deliver on the modular Mac Pro promises with a product of equal calibre.

When Apple announced the HomePod at WWDC in June, I couldn’t understand why they chose to show it so far in advance. HomePod doesn’t have an SDK that developers could learn about, nor did it serve as a platform for a new wave of Siri features. Moreover, Apple didn’t need to scrape the barrel to find stuff to talk about. The WWDC keynote was jam-packed with hardware and software announcements. HomePod could have been cut and it would have still been a very impressive event.

I care less about the reason for the delay (it’s probably something boring) and more about why Apple felt pressured to announce their smart speaker prematurely in the first place. I’d love to hear the internal justification. In my view, HomePod could have been unveiled at the September event with no downside.

iPhone X Home Indicator Tinting

11 November 2017

In a post for 9to5Mac, I brought up an area where the iPhone X currently does the wrong thing, at least in my opinion, regarding the aesthetics of the home indicator. In an app with mostly dark content, like the iTunes Store or the Watch app, the home indicator is coloured stark white.

This is not very nice to look at; the subtle greys and blacks of the application clash with the bright white rounded rectangle. It distracts the eye. You can see that in the screenshot above, on the left. If you are reading this post on an LCD screen, consider that the problem is amplified further on an actual iPhone X with its high-contrast OLED screen.

In the right-side mockup, I colour-matched the home indicator with the text colour of the tab-bar items. This is a simple but very effective change. From a technical perspective, UIKit could easily grab that value from the appearance proxy — no additional API surface needed — for a much more pleasing result. The home indicator is unmistakably still there; it just integrates neatly with the app chrome.

Ideally, Apple would expose a dedicated API that lets each app decide what colour the home indicator should be in the current context. It already allows developers to set whether the home indicator should be temporarily hidden, to avoid disrupting full-screen experiences like watching video.

The automatic algorithm does a decent job at guessing the best colour for the indicator at any particular moment (it’s actually a luminosity-blended texture, not one single colour), but it would be better if each app could provide its own suggestion. The suggested colour would only be a preferred tint; the system could choose to override the developers’ wishes if it deemed it necessary.

Right now, the indicator is only ever depicted in shades of grey. That doesn’t need to change with the API extension proposed above; in fact it’s probably enough if the API simply let the app say what the maximum brightness of the indicator should be.
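For context, here is roughly what the existing hiding affordance looks like in code, with the proposed tint preference sketched alongside it as a hypothetical property that UIKit does not currently offer:

```swift
import UIKit

class VideoPlayerViewController: UIViewController {
    // Existing UIKit API: ask the system to auto-hide the home indicator
    // during full-screen experiences like video playback.
    override var prefersHomeIndicatorAutoHidden: Bool {
        return true
    }

    // Hypothetical extension along the lines proposed above; this property
    // does not exist in UIKit. The app would suggest a preferred tint (or a
    // maximum brightness) and the system would remain free to override it.
    // override var preferredHomeIndicatorTintColor: UIColor? {
    //     return UIColor(white: 0.35, alpha: 1.0) // e.g. match the tab bar item text colour
    // }
}
```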

Apple Gives Early iPhone X Access To YouTubers

1 November 2017

I think the Monday YouTube iPhone X videos were a shambles. Not because they were YouTubers, but because Apple didn’t give them sufficient access to create interesting and engaging videos.

Every Apple-sanctioned hands-on posted on Monday was the exact same, incredibly generic, rough overview of Animoji, Face ID and the bigger screen. Each video was shot in the same New York City location and felt incredibly scripted by the invisible hand of Apple PR, with restrictive guidelines on what they could talk about and limited time to handle (and shoot) the product.

With these constraints imposed, it’s no surprise that the videos are homogenous, drab retellings of the same features. I’m sure Apple PR loved it as a way to advertise its iPhone X talking points to a wide base of people for free, but as a collection it didn’t work.

What allows the tech press to create compelling content for Apple products is that they have the freedom to take a review unit home with them, into their own unique environments and situations. Apple threw away the creative diversity of YouTube with how it orchestrated the Monday early-access previews. I think it’s cool that Apple is reaching out to more YouTube personalities, specifically small to medium-sized channels, but on this occasion it fell flat. Apple stacked the deck against them; they didn’t have the freedom to make captivating, immersive videos.

You know who made the best YouTube hands on? Brooke Peterson. Her video had a story, it had a flow, it had a cool setting. The grittiness made it feel like real life, which is ironic given its illegitimacy. With the Apple-sanctioned videos, it was impossible to escape the artificial studio lighting of the demo room.

HomePod SiriKit Integration

31 October 2017

SiriKit, Apple Developer:

With the intelligence of Siri, users control HomePod through natural voice interaction and can conveniently access iOS apps that support SiriKit Messaging, Lists, and Notes. Siri recognizes SiriKit requests made on HomePod and sends those requests to the user’s iOS device for processing. To prepare your app, make sure that your SiriKit integration is up to date and that you’ve adopted all of the appropriate intents.

Here’s the flow. The HomePod listens for a request from a user. If it recognises it as a request meant for a third-party app, it sends the necessary data to a nearby iPhone or iPad with the app installed. The iOS device sends the response back to the HomePod, which speaks the reply. It’s similar to how WatchKit 1.0 worked, where the connected phone did all of the heavy lifting for third-party Watch apps.

Requiring an at-home iOS device to handle a third-party app request isn’t much of a limitation, at all. I can’t think of a situation when I would be using the HomePod and not have the iPhone somewhere in the house. There are benefits to a satellite ‘remote control’ approach too: developers don’t have to do anything special to support HomePod, all service configuration will naturally mirror the user’s phone apps, and there’s no need for users to manage another list of installed apps.
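To make this concrete, here is a minimal sketch of a SiriKit messaging intent handler. The same Intents extension that already serves Siri on the iPhone would answer requests relayed from HomePod, with no HomePod-specific code; the class and helper names are illustrative.

```swift
import Intents

// Sketch: the messaging intent handler an app's Intents extension provides
// for Siri on iPhone. Requests spoken to HomePod are relayed to the phone
// and arrive here in exactly the same way.
class SendMessageHandler: NSObject, INSendMessageIntentHandling {

    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // Hand the content off to the app's own messaging service.
        // MessageService is a hypothetical helper, not a system API.
        // MessageService.send(intent.content, to: intent.recipients)
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```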

HomePod is lacking in capability in key areas, though. The scope of SiriKit is small on iOS and it’s even smaller for the HomePod integration: limited to third-party notes, lists and messaging apps. Some of the SiriKit domains don’t make sense because they require mobility (workouts) or a display (visual codes) but there are others that could be useful if the technical infrastructure could support them. Hailing an Uber from your living room smart speaker is a first-world convenience that HomePod cannot serve.

It would also be nice if Apple opened up new SiriKit domains to coincide with the HomePod launch, to give it more functionality. Third-party podcast and music apps are notable omissions — relevant to HomePod, iPhone and iPad — but there is no news on this front whatsoever.

Most significant for HomePod is how it behaves as a device shared by multiple people. Or, more accurately, how it seemingly ignores any attempt to be a shared home product at the software level. It seems like one user will sign into the HomePod with their Apple ID and iCloud, and all Siri features will be funnelled through that one account. This applies to first-party and third-party services alike.

If you look at the HomePod solely as an Apple Music jukebox, even that has data that is unique to different members in the family: personal playlists and mixes. The first version of the HomePod software appears to have no support for multi-user accounts at all. Not good.

Tim Culpan On iPhone X Production Problems

23 October 2017

Bloomberg:

So when Cupertino decided to go with OLED, it must have known that supply would be tight and the company would be relying on nemesis Samsung. Perhaps Cook and Williams were OK with this and figured Samsung would ramp up fast enough to ensure OLEDs for all, or maybe they thought alternative suppliers would come on stream.

The biggest bottlenecks in the iPhone X supply chain are not directly related to the OLED screen at all. The OLED panel is expensive (five times more costly to Apple than an iPhone 8 Plus LCD) and only available from a single supplier, but that screen can be produced at the required ‘Apple scale’ by Samsung.

It is true that adopting an (almost) edge-to-edge OLED screen had implications for the product’s specifications, namely the need for Face ID as a replacement for Touch ID. The depth-sensing camera system is one of the parts that caused holdups in ramping iPhone X units.

Regardless, Apple handled the situation the best it could. They needed to bring out a revolutionary high-end phone this year; the 6-series chassis with forehead and chin was showing its age. I’m sure they knew about the potential pitfalls that could arise in the supply chain, as is natural for any product like iPhone X that uses new internal components at incredible volumes, but they simply had to take that on the chin or risk falling behind in the marketplace.

Instead of the X, imagine that Apple had released a much more conservative phone as its 2017 flagship iPhone. It may well have been readily stocked in stores, but who cares if nobody is excited to queue up (figuratively) and buy it. It would also have been incredibly punishing to Apple’s brand reputation; the press would have published negative stories about Apple’s lack of vision and innovation in droves.

It’s just an untenable proposition. They needed an impressive high-end device, at any cost, and the iPhone X is exactly that. Millions of people are about to buy it on Friday. Millions more will be gasping to buy it as soon as they can. The production issues and lack of supply are frustrating … however, it’s only a short-term concern. KGI’s Ming-Chi Kuo believes iPhone X production will ramp up to full capacity sometime in November.

Facing the choice between launching a radical new design with a few months of supply shortages, or heralding a ‘boring’ iPhone for another year, Apple clearly made the right decision.

Google Pixel 2 Cameras

18 October 2017

The Verge:

Google attempts to do the same thing with a single lens that other cameras do with two: detect depth data and blur the background. Most phones do this by combining computer recognition with a little bit of depth data — and Google is no different in that regard.

What is different is that Google is much better at computer recognition, and it’s gathering depth data from the dual-pixel system, where the half-pixels are literally less than a micron apart on a single image sensor. Google’s proficiency at machine learning means portrait images from the Pixel 2 do a better job of cropping around hair than either the iPhone 8 or the Note 8.

The Pixel 2 cameras are very impressive. The sample photos are very sharp and the automatic HDR+ effects make most of the images look hyper-real; probably not the most accurate depiction of the real-life scene, but they look good.

It’s also fascinating to see others do Portrait mode features with a single lens. As everything in technology follows a path towards miniaturisation, the Apple approach — a dual camera system — will eventually be obsolete. One day. As it stands, though, the second camera enables another feature that no single-lens phone offers: optical zoom.

The 2x zoom of the telephoto camera is a huge feature. In fact, when the iPhone 7 Plus first launched, the only function the dual cameras served was higher-quality zooming. The depth effect Portrait camera didn’t ship until a month after the phone was released.

The ability to zoom without digital cropping is a big deal. It justifies having two ugly holes poking out of the back of the phone, rather than one. It doesn’t matter that Apple can ‘only’ achieve Portrait mode by using two lenses until Google (or another prominent phone manufacturer) can do optical zoom with a single lens.

Developing For Apple Watch

9 October 2017

The Watch is cool, making apps for it not so much. WatchKit doesn’t give the developer much freedom when it comes to design. The interface is composed of pre-compiled layouts and generic UI elements, with limited customisation. It’s a rich templating engine.
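To illustrate the templating model, here is a minimal sketch of a WatchKit list screen; the row type and outlet names are illustrative. The storyboard defines the layout up front, and code can only populate it through a fixed set of high-level setters.

```swift
import WatchKit

// Row controller: outlets for the elements laid out in the storyboard.
class FeedRowController: NSObject {
    @IBOutlet var titleLabel: WKInterfaceLabel!
}

class FeedInterfaceController: WKInterfaceController {
    @IBOutlet var table: WKInterfaceTable!

    override func awake(withContext context: Any?) {
        super.awake(withContext: context)

        let posts = ["First post", "Second post", "Third post"]
        // Rows can only be instantiated from storyboard templates…
        table.setNumberOfRows(posts.count, withRowType: "FeedRow")
        for (index, post) in posts.enumerated() {
            guard let row = table.rowController(at: index) as? FeedRowController else { continue }
            // …and configured through pre-defined setters. There is no
            // layout code, no custom drawing and no scroll callbacks.
            row.titleLabel.setText(post)
        }
    }
}
```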

Behaviours and interactions are only achievable for third-party developers on watchOS if someone at Apple has already invented them and exposed a checkbox in Interface Builder. Any dynamic transition or animation in a WatchKit app is basically impossible.

For example, you can transition a table row to a new appearance if that row changes height because WatchKit happens to support that. But if you want to cross-fade the contents of a row that has the same height before and after, you are out of luck.
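In code, the supported case looks roughly like this: animating a row group’s height inside the controller’s animation block is possible, but there is no equivalent hook for swapping or cross-fading a row’s contents in place (names here are illustrative):

```swift
import WatchKit

class ExpandableRowController: NSObject {
    @IBOutlet var containerGroup: WKInterfaceGroup!
}

class ExpandableListController: WKInterfaceController {
    @IBOutlet var table: WKInterfaceTable!

    func expandRow(at index: Int) {
        guard let row = table.rowController(at: index) as? ExpandableRowController else { return }
        // Height is one of the few animatable properties WatchKit exposes.
        animate(withDuration: 0.3) {
            row.containerGroup.setHeight(120)
        }
    }
}
```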

Want to create a social feed with post summaries and photos that use a parallax effect to subtly shift perspective as you move through a story? No joy. WatchKit does not provide realtime scroll events to apps so there is no way to react to a change in view offset. Even if it did, the lack of freeform layout effectively makes it unfeasible.

Let’s pretend Flickr wants to make a watch app that showcases the best images of the week uploaded to the platform, with a wall of thumbnails that fills the watch display. Users could use the Digital Crown to zoom in and focus on a single photo in fullscreen. You could perhaps favourite it to find it again later on a Mac or iPad. Seems like an interesting app? Literally only in your dreams with WatchKit. (In my dreams, Flickr is also a thriving photo sharing site.)

You have to fight the system at every turn to do anything non-standard … and in most cases it still isn’t achievable. There are a few ‘advanced’ interactions that Apple has made special affordances for developers to take advantage of, but I can probably count the number of them on one hand. There’s a reason why all third-party watch apps look uninspiring and generic; there’s just not much you can do to make your own app stand out.

What really rubs salt in the wound is that Apple has access to a completely different Apple Watch technology stack and doesn’t hesitate to take advantage of it in its own apps. In thinking about what I wanted to say in this article, I started flicking through the honeycomb, trying to find a stock app that could be visually replicated by a third party. I really, really struggled.

The examples I wrote up above were not invented arbitrarily. The parallax story feed is literally describing the Apple News app. The photo wall describes the interactions of the Photos app. The update-in-place custom transitions are used all over the system — I was specifically thinking of the Heart Rate app which dynamically updates the current heart rate readout using a rotary-dial text animation.

The kinds of things Apple doesn’t let you do are critical to building a rich and responsive application. They should not be passed off as little niceties; they play a significant role in making an app feel alive and more enjoyable to use. Let’s drive this home with more examples of stock apps doing things third-party developers can’t.

Messages uses a zooming effect for bubbles as you scroll through the transcript, and swipe actions for the summary cells on the main screen. Calendar pushes the title bar alongside the list of events. Music has a beautiful transition for scrolling between albums on the home screen; it feels like you are flicking between jewel-case CDs on a shelf. Activity relies on a rich graphical representation of progress, the rings, with independent animations for each segment and several custom live-updating animated charts hosted inside table cells that scale up as the user scrolls.

Even something mundane like the contact list in the Phone app shows hundreds of rows with a large address book, far more cells than WatchKit can manage, and presents a custom letter-by-letter scrubbing interface when you scroll the Digital Crown quickly. Tap on a contact photo and it smoothly expands to fill the Watch display. These interactions are so basic I had to double-check I wasn’t crazy, but sadly it is true these things are all unavailable to external developers.

After looking at every app on my watch, I think three Apple apps could be implemented by an outsider: Alarms, Settings and Stocks. That’s it. (Alarms and Settings are very plain apps mostly consisting of standard table rows. Stocks has a dynamic behaviour where you can scroll/swipe through the detail views like vertical pages. This interaction is one of the few things Apple has packaged up for WatchKit developers to access.)

Apple engineers are using a completely different technology stack to create the system apps. They get to write real iOS apps with a watchOS appearance theme, essentially. Third-party developers have to use WatchKit — a completely separate abstracted framework that exposes only high-level interface objects (whilst creating UIKit components under the covers).

The current WatchKit API leaves no room for invention. iOS innovations like pull-to-refresh came about because the iPhone OS UI frameworks were flexible enough to let developers and designers run wild with their own ideas, if they wanted to. Some of these custom controls worked so well that Apple later incorporated them as standard components in UIKit. That free-rein creativity simply can’t happen on the watch at the moment. Apple defines what is possible.

I hope this adequately conveys the frustration I had developing Visual Codes for Apple Watch. There’s no freedom to make what you are imagining in your head which means, for me, there is almost no fun in making it either. I did it because I had to.

Unlike iOS, making a WatchKit app is like a chore where you have to do a set number of things in a set number of ways. And that’s just an exposition of the UI side. I haven’t even covered the restrictions on what features are actually implementable on current watchOS. Those functional limitations preclude many categories of Watch apps from being made at all.