Apple Music, Despite The New Icon, Is Not A New App

5 July 2015

Cezary Wojcik:

There were options to “Show Apple Music” and use the “iCloud Music Library,” both of which were unchecked, for some reason. I checked the options, and after only a few server errors when trying to log in to my iTunes account, my copy of iTunes finally showed the “For You,” and “New” tabs that I had seen screenshots of.

My playlist finally showed up in the “Playlists” section. The song that I had spent so long trying to delete was nowhere to be seen in the “My Music” tab, though it still persists on my iPad. I started thinking about how I’m supposed to add songs to a playlist. In Spotify, the list of playlists constantly stays on the left so that I can easily search for songs and simply drag them into the playlists. Wondering what would happen, I searched for “Animals as Leaders.” Turns out that the search pop-up has an option to search either through your library or through “Apple Music.” It defaults to your library every time you click the search box, so you have to take an extra step to click “Apple Music” in the actually pretty ugly search dialog every time you want to search. Sure, whatever, just get me to the songs.

Many, granted not all, of the UI problems associated with Apple Music are because Apple decided to integrate the new features into their existing music apps. On iOS, they extended the interface for the existing Music app. On the Mac, they added even more tabs and settings to the bloated iTunes app.

I think Apple would have been better served by developing its streaming service as an independent piece of software. In the same way that enabling iCloud Photo Library is a clear line in the sand for how you manage photos on iOS, Apple Music should be a similarly separate experience. Right now, a lot of compromises are made so that the new and old experiences can coexist.

To make this happen, they crammed the tab bar with ‘For You’, ‘New’ and ‘Connect’. This used up the available tab slots, so they had to find alternative ways of representing music filtering. For instance, they added a weird popup menu to switch between Albums, Artists and Songs. In the old app, these views were instantly available in the tab bar with one press. It doesn’t look like a hamburger navigation stack, but that’s essentially what it is: a bundle of actions hidden behind a button.

Similarly, every table cell now has a ••• button which brings up a contextual menu. Unfortunately, what’s contained in this menu is pretty random and illogical. The list varies from context to context so you can’t develop muscle memory for tapping on specific things, often omits seemingly obvious common actions and sometimes duplicates things already on the screen. There are also just too many actions shown at one time, which lacks focus.

Distinguishing the streaming Apple Music service from the current iTunes-backed music library with two different apps would help a lot. The Music app could still prompt people to upgrade to the subscription service but the apps themselves would be distinct. An ‘Add To Library’ option in Apple Music would treat the song as if it came from iTunes and show it in the local library, as iTunes In The Cloud music does today. Naming would naturally need rethinking: having two apps simply named ‘Music’ and ‘Apple Music’ would be confusing.

Obviously, it is inelegant to have two apps, but Apple’s current half-and-half attempt causes other problems. A lesser of two evils, if you will. Maybe there is a way to incorporate everything into one app nicely (necessarily dropping the tab bar for stack-based navigation), but two apps surely simplify the mental model of ‘what is my music’ and ‘what is streaming’. Having two apps reeks of ugliness, but observe how Apple has kept the Music app separate from the iTunes Store since iOS began. There are advantages to not stuffing everything into one.

Now, let’s consider the Mac. More accurately, how Apple messed up by adding streaming features to the iTunes app (Mac and PC). Imagine instead if Apple had created a separate app that handled all the streaming and recommendation features of Apple Music. Cross-platform compatibility isn’t really a barrier: Apple could make the app the same way it makes multi-platform iTunes. This clarifies a lot of the required UI. The new app could be tailored to the streaming music experience, and any intercommunication with the download-centric iTunes Store could be completely ignored. Everything syncs over iCloud anyway, so there is no need for iTunes’ syncing infrastructure to be ported.

It sounds messy but this is exactly the same setup as other streaming services: you have both the Music app and the Spotify app. The same was true with Beats Music (before Apple acquired it, obviously).

The transition period is what makes this especially hard. With the move from iPhoto to Photos, Apple basically said all or nothing. You either move forward to the new app or you stay in the past, where iPhoto stops working as libraries fall out of sync. The same attitude can’t really be taken with an iTunes and Apple Music split, as Apple can’t ask its customers to ‘upgrade’ to a new world where they have to pay monthly. There was no payment associated with the iPhoto to Photos migration.

Apple tried to put Apple Music into the existing iTunes interface and did an okay job. I am sure there are better unified designs possible than what Apple produced to integrate both the old and new worlds, but clearly it is hard: hard on creativity and hard on engineering. Even accepting the disadvantages, building Apple Music as an independent app may have been the better strategic move.

Twitterrific Adds Face Detection To Improve Image Crops

29 June 2015

The Iconfactory:

It means that as Twitterrific displays media thumbnails in the timeline (pictures, videos, etc), the app tries to detect faces and frame the thumbnail so faces are always showing. In short, if Twitterrific sees a face in a tweet, it tries to make sure you see it too!

The effect when scanning through your list of tweets in the timeline can be dramatic. Previously Twitterrific always framed thumbnails on the center of images, but many times people’s faces aren’t in the middle, especially on portrait shots. Check out these before and after comparison screen shots to see the difference facial framing makes in the timeline:

Apple includes a load of APIs as standard in the SDK, ranging from language analysis and physics engines to facial feature recognition in images. As a developer, you are always looking for ways to build new apps and features by applying these rather niche but rich frameworks. Often, this means creating an entirely new app off the back of such a framework (Pedometer++ is a great example), but Twitterrific have cleverly thought up a way of using Core Image detectors to enhance the experience of their existing app.

They use Apple’s face detection APIs to better frame cropped images in the timeline. Is it life changing? No. Is it nice? Yes. Ideally, the feature is invisible. This is why these frameworks exist. The Iconfactory can piggyback off the work done by Apple (which originally developed the face detectors for its own Camera app) and deliver improvements like this with far less effort than if they had to build a face detection engine in house.
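
To give a sense of how little code is involved, here is a rough sketch of the general approach, not The Iconfactory’s actual implementation: Core Image’s built-in face detector finds a region of interest and the thumbnail crop is centred on it, falling back to the image centre when no face is found.

```swift
import CoreImage

// Illustrative only: centre a square thumbnail crop on the first detected face,
// falling back to the centre of the image when no face is found.
func thumbnailCrop(for image: CIImage, side: CGFloat) -> CGRect {
    let detector = CIDetector(ofType: CIDetectorTypeFace, context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyLow])
    let faces = detector?.features(in: image) ?? []
    let focus = faces.first?.bounds ?? image.extent

    var crop = CGRect(x: focus.midX - side / 2, y: focus.midY - side / 2,
                      width: side, height: side)

    // Clamp the crop so it stays within the image bounds.
    crop.origin.x = min(max(crop.origin.x, image.extent.minX), image.extent.maxX - side)
    crop.origin.y = min(max(crop.origin.y, image.extent.minY), image.extent.maxY - side)
    return crop
}
```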

Aftermarket Gold Plated Apple Watches

25 June 2015

9to5Mac:

First, here’s a quick recap of how WatchPlate, the service we first discovered in May, transforms ordinary Apple Watches into beautiful but affordable Edition-like timepieces. You buy a stainless steel Apple Watch (the gold plating doesn’t work with aluminum Sport models) and ship it to WatchPlate. For a flat rate of $399, the company plates your Apple Watch and band in 18k yellow or rose gold and ships it back within roughly 3-4 business days. Shipping is covered and you can pay a little more to have additional bands plated.

The result is super impressive, especially considering the complexity of the Apple Watch being a miniature computer aside from metal casing.

You’d be able to tell from the weight (and the insignia on the back1) that it isn’t a real Edition, but it sure looks like one. If Apple were run by Samsung designers, there would be official gold-plated Apple Watches already. Ive would never allow Apple to sell such a product with materials so untrue to themselves (steel pretending to be gold), even though I reckon it would sell well.

1 Both weight and inscription would be imperceptible to others when worn though.

Adding VoiceOver To Apps

21 June 2015

The Iconfactory:

We’re not in this business just to make money: all of us at the Iconfactory hope that our products will make people’s lives better. We’ve worked hard to make Twitterrific work well with the accessibility features in iOS. Hearing that these efforts make things easier for customers with disabilities is rewarding beyond words. (Listen to the podcast file in that last link to get a great idea of what life is like for a VoiceOver user.)

But now there’s another incentive for thinking about accessibility: helping others also helps your downloads.

This is a bit disingenuous. The sales boost from this Apple feature is almost certainly temporary and there are no guarantees that Apple will run such a promotion again. Also note that most of the apps in the store with VoiceOver support were not featured in this list and received no benefit at all from their investment here.

On purely monetary grounds, investing in VoiceOver support is hard to justify with a cost-benefit analysis; I haven’t seen it materially affect sales under normal circumstances. Luckily, very little work gets you very far with VoiceOver conformance. It’s a nice thing to do that makes your apps accessible to an even larger base of people. As a developer, taking some time to do this is incredibly rewarding in ways other than bottom-line profit. I love getting feedback from blind users of my apps. It makes me feel good that I’ve helped someone. That’s what makes adding VoiceOver worthwhile.
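
To illustrate how little that work can be, here is a minimal, hypothetical sketch (the views and strings are made up, not from any particular app): most of basic VoiceOver conformance is labelling icon-only controls and giving complex cells a single, sensible description.

```swift
import UIKit

// Hypothetical example: give an icon-only button a spoken description and
// collapse a complex tweet cell into one VoiceOver element.
func configureAccessibility(favouriteButton: UIButton,
                            tweetCell: UITableViewCell,
                            authorName: String,
                            tweetText: String) {
    favouriteButton.accessibilityLabel = NSLocalizedString("Favourite", comment: "Favourite button")

    tweetCell.isAccessibilityElement = true
    tweetCell.accessibilityLabel = "\(authorName): \(tweetText)"
}
```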

iPad Mini No Longer Sold On Apple.com

19 June 2015

9to5Mac:

The original iPad mini has quietly disappeared from Apple’s web site, and is no longer available to purchase new from the Apple Store. Introduced in October 2012, the first iPad mini established the industrial design that was subsequently used in the iPad mini 2 and iPad mini 3, as well as the larger but otherwise nearly identical iPad Air and iPad Air 2. Apple notably continued to sell the 16GB iPad mini as an entry-level model alongside two of its sequels, dropping its price to $299 in October 2013, then $249 in October 2014.

It took five years but finally every device in the currently sold iOS lineup has a Retina display. It also signals the end of the A5 chip, but developers can’t get too excited about dropping support as iOS 9 will still run on the original iPad mini. The fifth-generation iPod touch also includes an A5 … I think that will disappear soon. My guess is that it will be replaced, rather than the line just being killed, by a new non-cellular iPhone variant with a 4.7 inch display.

The Touch is a nice gateway drug into the iOS world so I don’t think Apple can drop the product entirely. The only way it could get completely axed, I believe, is if Apple can sell an iPhone at iPod touch price points. We aren’t there yet.

Making The Original Crash Bandicoot

18 June 2015

Quora:

By far the best part in retrospect—and the worst part at the time—was getting the core C/assembly code to fit. We were literally days away from the drop-dead date for the “gold master“—our last chance to make the holiday season before we lost the entire year—and we were randomly permuting C code into semantically identical but syntactically different manifestations to get the compiler to produce code that was 200, 125, 50, then 8 bytes smaller. Permuting as in, “for (i=0; i < x; i++)“—what happens if we rewrite that as a while loop using a variable we already used above for something else? This was after we’d already exhausted the usual tricks of, e.g., stuffing data into the lower two bits of pointers (which only works because all addresses on the R3000 were 4-byte aligned).

Ultimately Crash fit into the PS1’s memory with 4 bytes to spare. Yes, 4 bytes out of 2097152. Good times.

These anecdotes are so cool. 2,097,152 bytes is nothing, but back then it was the entire memory space available for a game. It’s hard to conceive how restrictive that is given today’s abundance of memory and storage. 2,097,152 bytes is exactly 2 MiB, or roughly 2.1 megabytes. Can you think of any modern day media that fits in 2 megabytes?
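
The pointer trick mentioned in the quote is a fun one to unpack. Because every address on the R3000 was 4-byte aligned, the two lowest bits of a pointer are always zero and can carry a small tag for free. A minimal sketch of the idea, written in Swift purely for illustration (the original was C):

```swift
// Illustrative only: pack a 2-bit tag into the spare low bits of an aligned pointer.
func tagged(_ pointer: UnsafeRawPointer, tag: UInt) -> UInt {
    precondition(tag < 4, "only two spare bits on a 4-byte-aligned pointer")
    return UInt(bitPattern: pointer) | tag
}

// Recover the original pointer and the tag by masking the low bits back out.
func untagged(_ value: UInt) -> (pointer: UnsafeRawPointer?, tag: UInt) {
    return (UnsafeRawPointer(bitPattern: value & ~UInt(3)), value & 3)
}
```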

iOS 9 Picture In Picture Takes Desktop Concepts To Mobile And Does It Better

12 June 2015

iPad multitasking was sorely needed. What Apple did matches up really well with what I asked for way back in January 2014. In iOS 9, Apple implemented both of my feature requests: Panels (which Apple calls Split View) and Popovers (which Apple calls Slide Over). The nomenclature is different but the feature descriptions are almost identical. The iPad Pro element of the puzzle will no doubt show itself later in the year.

iPad multitasking is hard to verbalise and describe well as it’s really an umbrella term for a class of different modes and behaviours. I’ve tested it and everything is pretty straightforward and intuitive to actually use; way easier than trying to write up an explanation of every interaction. I want to focus on one element of iPad multitasking in particular: picture in picture.

The thing about the iPad picture-in-picture implementation is that it’s actually better than how one would handle such a task on a Mac. On a Mac, trying to play a video in the corner whilst getting on with your work is difficult. Let’s take a video on YouTube playing in Safari. To play this in a corner of the screen on a Mac, you have to pull the tab out into its own window. Then, you have to manually drag the corners of the window to resize it and do your best to clip out all the unnecessary surrounding UI by hand. No doubt the window has a toolbar, so you’ll probably have to use some awkward keyboard shortcut or hidden menu command to hide that as well.

Then you have to actually manage the window as you go on with your work. What do I mean by this? Well, with every other task you open, you also have to make sure it doesn’t occlude the video playback window by dragging it out of the way. The video can’t stay foremost, so it’s actually really easy to lose it amongst your other windows.

If you ever want to move the video from one corner to another, not only do you have to position the video on the screen, you also have to move all your other windows back over to the other side.

What if you want to make the video view a bit bigger? Drag the corners, right? Nope. Dragging the corners of Safari just makes the viewport bigger, revealing the webpage content that surrounds the video. To actually make the video bigger, you have to zoom into the page and then readjust the window region to fit the new video bounds. It’s a mess and the iPad implementation should embarrass the Mac team.

On the iPad, you play a video. With the video still playing, you press the Home button and the video scales down into a floating always-on-top chromeless panel. There’s a subtle drop shadow to indicate layering but nothing overbearing. You can make it bigger or smaller with a quick two-finger pinch and use one finger to flick it to any corner where it snaps into place. It’s so much simpler. There’s nothing to manage.

Just compare how many words I needed to describe the required interactions on the Mac and the iOS 9 iPad to achieve the same result. The drastically-simplified iPad implementation puts the Mac to shame. iOS 9 picture-in-picture is really great.

To seal the deal further, this behaviour also works with FaceTime calls. On iOS 9, with the same single press of the Home button, you can now multitask videoconferencing and any other app. This is a massive boon for business customers but also benefits normal people: my Mum always switches out to other apps whilst talking to me on a FaceTime call, and having a persistent video stream of my face is a perfect reassurance that I am still ‘there’.

Picture-in-picture really is a fantastic feature with fantastic design. It’s an incredible translation of a desktop metaphor to a tablet. It’s so much simpler than a typical window manager with almost no compromise. It’s just better. Why doesn’t the Mac work this way?
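
As a developer aside, opting into this behaviour from the app side looks similarly low-friction. A hedged sketch, assuming an AVPlayerViewController-based player and that the app has the relevant background mode enabled in its capabilities:

```swift
import AVKit
import AVFoundation

// Illustrative sketch: configure an AVPlayerViewController so system
// picture-in-picture can take over when the user leaves the app.
func makePlayerController(for url: URL) throws -> AVPlayerViewController {
    // Video playback requires the playback audio session category.
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .moviePlayback, options: [])

    let controller = AVPlayerViewController()
    controller.player = AVPlayer(url: url)
    controller.allowsPictureInPicturePlayback = true // the default, shown for clarity
    return controller
}
```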

ReplayKit Screen Recording In iOS 9

11 June 2015

9to5Mac:

With ReplayKit, developers will be able to offer users the ability to screen record gameplay or other apps automatically or manually with a single tap. Users will then be able to share recorded content through an iOS share sheet directly to social networks and video sharing sites. Apple pauses all incoming notifications and anything that might ruin the gameplay video experience, and only users will have access to the recorded videos.

It’s cool technology. The lacklustre part of this jigsaw is the sharing element. Apple’s answer to this problem is to present the system share sheet and delegate all responsibility to third parties. This feels like a cop-out. Apple should integrate this feature with Game Center so users could upload their clips to Apple’s servers, and friends could visit Game Center profiles and watch the highlights. The videos would ideally be tagged with the app they originate from, helping with marketing.
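
For context, the developer-facing flow is about as simple as the description suggests. A rough sketch using ReplayKit’s screen recorder (method names as in current Swift; the presenting view controller is assumed for illustration):

```swift
import ReplayKit
import UIKit

// Illustrative only: start recording, then later hand the user the system
// preview controller, which handles trimming and the share sheet.
func startGameplayRecording() {
    RPScreenRecorder.shared().startRecording { error in
        if let error = error {
            print("Could not start recording: \(error)")
        }
    }
}

func stopRecordingAndShare(from presenter: UIViewController) {
    RPScreenRecorder.shared().stopRecording { previewController, error in
        guard let preview = previewController, error == nil else { return }
        presenter.present(preview, animated: true)
    }
}
```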

Mike Beasley Mocks Up iOS 9 App Store App With San Francisco Font

6 June 2015

Mike Beasley, 9to5Mac:

A second factor that helped influence these design changes (to a lesser degree) is the rumor that iOS 9 will change the system font to San Francisco, the typeface created for the Apple Watch. All of these mockups use that font.

On the Happy Hour podcast, I tried to make an analogy between swapping fonts in UI and introducing a new piece of modern furniture into a room. I say tried because I’m pretty sure I failed to make my point.

Let me try again in words. You can’t just replace all instances of Helvetica with San Francisco if you want the design to look good. The rest of the iOS UI is dependent on Helvetica’s sizing and aesthetics. I think Beasley’s mockups show this well. On toolbars, the angular, square letterforms of San Francisco just don’t mesh with the nicely rounded bar button icons like the Search magnifying glass. Those icons are rounded, and use the same line width, to match Helvetica, not San Francisco.

To do this properly, you would have to rework the bar button icons and other details (like navigation bar height).

Font choices are an integral part of the UI; the whole design is interconnected with the choice of typeface. For the Watch, Apple made different variants of San Francisco for different parts of the interface. The UI is designed with the mannerisms and characteristics of the font in mind, so it works well and looks good.

For iOS 9 and OS X 10.11, I hope Apple will apply the same level of care, although I fear they haven’t. Redesigning UI and drawing new icons is a big project and the rumours haven’t pointed to this happening. One semi-solution might be to use the rounded San Francisco typefaces. They would naturally be a better fit as a substitute for the (also rounded) Helvetica.

Unfortunately, historical precedent suggests Apple will not apply this level of care. With Yosemite, Apple willingly switched out Lucida Grande for Helvetica on the Mac, and I think the result is not ugly, but not beautiful either. With just two days until the reveal, I can only hope that Apple has done a better job this time around.

Google Photos 'Free' Unlimited Photo Storage

29 May 2015

Google:

Google Photos gives you a single, private place to keep a lifetime of memories, and access them from any device. They’re automatically backed up and synced, so you can have peace of mind that your photos are safe, available across all your devices.

And when we say a lifetime of memories, we really mean it. With Google Photos, you can now backup and store unlimited, high-quality photos and videos, for free. We maintain the original resolution up to 16MP for photos, and 1080p high-definition for videos, and store compressed versions of the photos and videos in beautiful, print-quality resolution. For all the storage details, visit our help center.

When Google announced this, I tweeted about how embarrassing it is for Apple’s iCloud pricing and was barraged with feedback that it isn’t really free. If you are not paying for the product, you are the product. Yes, I know how Google works. I’m sure that somewhere in the terms of service it says Google will use data extracted from my photo library to serve ads.

It doesn’t matter. What matters is that for almost-free, Google is offering unlimited photo storage. Apple’s almost-free plan gives you a paltry 20 GB. When I say it’s embarrassing, I’m not saying Apple should store all your photos for free. I’m saying Apple should store all your photos for a lot cheaper than they do. With such high margins on hardware, maybe they could do it for truly free.

If not, they can certainly slash their iCloud prices to better compete. Google gives you 15 GB for free and offers a terabyte for $10 per month. Apple offers 5 GB for free and a terabyte for $20 per month. The real kicker is when you realise Google offers that to people who may never have bought anything else from Google, whereas Apple has already extracted hundreds of dollars of profit out of its users in hardware sales and still charges comparatively exorbitant prices for cloud storage.

'Proactive' To Bring Google Now Features To iOS 9

28 May 2015

9to5Mac:

Below the search bar will sit a new user interface that automatically populates with content based around three key parts of iOS: Apps, Contacts, and Maps, effectively a considerably upgraded version of Siri’s existing “digital assistant” functionality. For example, if a user has a flight listed in her Calendar application and a boarding pass stored in Passbook, a bubble within the new Proactive screen will appear around flight time to provide quick access to the boarding pass. If a user has a calendar appointment coming up, a map view could appear with an estimated arrival time, directions, and a time to leave indicator based on traffic. Proactive will also be able to trigger push notifications to help the user avoid missing calendar events.

Beyond Calendar integration, the feature will be able to integrate with commonly used apps. For example, if an iPhone user typically opens the Facebook app when he wakes up around 9AM, a button to access Facebook will start to appear for the user around 9AM. If the user calls his mother every Tuesday at 5PM, a bubble to “Call Mom” could appear around that time every Tuesday. As this feature integrates deeply with a user’s contact list, it is likely that the Recent Contacts menu introduced to the top of the Multitasking pane in iOS 8 will be relocated to Proactive’s interface. Lastly, Proactive will be able to display restaurant suggestions and ratings around breakfast, lunch, and dinner times in Proactive, changing based on the user’s location.

First and foremost, Apple is moving the Spotlight search view to a new place in the UI (next to the first page of the home screen), which is a huge improvement for visibility compared to its current hidden location, only revealed by a disconnected downward swipe on the screen. Having no visual indicator for the feature is rough, but it was doubly complicated by the discrepancy between swiping from the edge to open Notification Center and swiping from ‘not-the-edge’ to activate Spotlight. Explaining this to family was tough, and weeks later they still forget Spotlight exists. Returning Spotlight to its pre-iOS 7 position is the right move for user experience, and it’s reassuring to see that Apple is not stubbornly against reusing some elements of the ‘old’ iOS era.

The wholly new Proactive features sound cool. From Gurman’s description, it sounds very similar to Google Now but focused on areas where Apple can collect good data. For example, Proactive will notice habits around when users open particular apps and surface them at the appropriate time, such as showing the Starbucks app in the morning for users with a daily commute.

The areas where Android will continue to surpass iOS are those where Apple cannot collect the same level of data. Few people use iCloud Mail, so crawling email for contextual information is out of the question, unlike Google with its dominant Gmail. In the future, I think Apple could get around this problem by looking at your email through the Mail.app databases. It doesn’t necessarily have to host your email to build features like this, but obviously that is a big undertaking, out of scope for the initial release.

In fact, Gurman reports that Vorrath is hesitant to launch Proactive in any ‘full’ capacity, preferring an iterative sequence of releases. This seems like risk-averse damage limitation. Just like Siri and Maps, Proactive is one of those fuzzy-logic, big-data features that will never work flawlessly every time. By starting small, Apple contains the inevitable PR backlash.

When iOS 9 ships, it will be interesting to observe whether people actually latch on to these new Proactive features and use them. Old habits die hard and tapping on application icons is deeply ingrained into user behaviour. Android has had Google Now features for years now and — anecdotally — they are underused too.

Jeff Williams Confirms Native Watch SDK To Be Announced At WWDC

28 May 2015

Code Conference, via 9to5Mac:

Walt: Let’s talk about apps, the core apps (fitness, messaging), and there are third-party apps that vary widely. Walt asking if Apple Watch 3rd party apps will improve, says developers unsure what to do.

Williams: 4000 third party apps. I think the 3rd party apps are going to get much better when developers can write code natively to the watch. Apple will release a preview of the native Watch SDK at WWDC. Williams says SDK will give sensor access, Digital Crown access, more independent games. Will come out of preview in the fall, much like iOS updates.

Williams was more open than most Apple executives about future product plans. He dropped a pretty big tease about a car and said that Apple will preview a new Watch SDK at WWDC, with native apps launching in the fall. To be fair, on this second point, Apple had already kind of announced a native SDK for “later this year” but now we have clear timing.

At first, I was confused by Williams’ use of language. Developing against a native platform requires Apple to release an SDK. As with iOS and OS X, Apple releases betas of these next-generation SDKs just after the keynote so developers can get started. A ‘preview’ sounded different (as if the real beta was coming later), but Williams then said that native apps would launch in the fall (i.e. September). Given that timing window, I think Williams was just using the wrong term or considered them interchangeable. To have apps ready for release by autumn, Apple has to start distributing proper beta SDKs at WWDC or very soon after, so that’s cool.

In light of this, WatchKit’s mere existence is amusing: a stopgap API that will have lasted, as a public development platform, just two months before its replacement was announced, and about six months until native Watch apps go live and nobody wants to use WatchKit extension-based apps ever again. This weird timeframe is largely caused by hardware shipping delays. If the Watch had debuted for Christmas 2014 as originally planned, the intermediary step of WatchKit would have made a lot more sense. Had Apple waited solely for native apps to be available, it would have been nearly a year before the Apple Watch got any third-party integration.

I have speculated on Twitter that a native SDK may not necessarily be as open and freeing as the native SDKs for iOS and the Mac; there could still be limitations on drawing or interface elements. Williams’ comments appear to address these concerns. He says that apps will be able to access the Digital Crown and the hardware sensors, and that there will be more independent games. The direct mention of games implies that custom drawing will be allowed. Although we’ll have to wait for Apple’s technical announcement at WWDC to be sure, it sounds as flexible as the development stack for iPhones, which is great.

ASA Continues Attack On YouTubers Over Misleading Sponsorships

27 May 2015

The Guardian:

“it wasn’t until a viewer had selected and opened the video that text, embedded in the video, referred to “Procter & Gamble,” the ASA said. “We considered that viewers should have been aware of the commercial nature of the content prior to engagement.”

The ASA also said that the use of the phrases “sponsored by” and “brought to you by” did not make it clear that the videos were marketing communications.

UK advertising regulation on YouTube videos is a fascinating area. It all blew up in November, when the ASA ruled that an Oreo sponsorship series did not make it clear that there was paid placement. Since then, YouTubers have generally followed the suggestions laid out in the original complaint by adding ‘Sponsored by’ disclosures to videos.

However, in this latest case on the Procter & Gamble makeup videos, the ASA indicates that such disclosures are not sufficient. In fact, the ASA wants commercial affiliations to be made clear before the video is even played.

Jony Ive Becomes 'Chief Design Officer' At Apple, Howarth And Dye Lead Industrial And UI Design

26 May 2015

Tim Cook, via 9to5Mac:

Design is one of the most important ways we communicate with our customers, and our reputation for world-class design differentiates Apple from every other company in the world. As Chief Design Officer, Jony will remain responsible for all of our design, focusing entirely on current design projects, new ideas and future initiatives. On July 1, he will hand off his day-to-day managerial responsibilities of ID and UI to Richard Howarth, our new vice president of Industrial Design, and Alan Dye, our new vice president of User Interface Design.

The Telegraph describes Ive’s change of position as a ‘promotion’. Ive was already the most influential person at Apple aside from Cook, and his personality doesn’t exactly suggest a need for a better title: Ive doesn’t really seem like a guy who wants an ego boost.

Dropping the PR spin, this is a reorganisation so Ive can back off from daily operations and do whatever he wants, “more travelling” included. Whenever he has a flash of inspiration or a new direction to explore, he can go back to the Cupertino studio. I think the writing is on the wall that he will retire in a few years.

I don’t know much about Dye and Howarth but they can’t be idiots. Dye came across well in the recent Apple Watch interviews, and if the Apple Watch UI is driven by Dye, then relieving Ive of some of that responsibility can only be a good thing for Apple software. Also note that Howarth’s appointment keeps Brits at the top of Apple’s industrial design department.

Reflecting On University

24 May 2015

Unless something crazy happens, I took my last ever academic exam on Wednesday, wrapping up three years of an Economics degree. I’m posting these thoughts with some immediacy so I can’t be accused of my opinion being poisoned by my final grade (which I find out in mid-June).

Before attending York, I was extremely pro-university. It seemed like the unequivocal best thing to do for anyone competent enough to pass. With my brother now thinking about his further education plans, I’ve been forced to honestly review my own experiences. And being honest, I can’t endorse it as effusively as I once could.

Part of the problem is me. I chose Economics because I was good at it at A Level and, in 2012, I was looking at a career in finance. Nowadays, I get endless questions about why I didn’t do a degree in Computer Science, and in hindsight that probably was the better choice. At the time, though, it didn’t make any sense. Although my apps were doing okay, they weren’t anywhere near a sustainable level. It wasn’t until February 2013, when Cloudier launched, that my iOS contracting took off in a meaningful way. By then, it was too late to change; I was already halfway through the first year of my Economics course. As my iOS work continued to grow, my thinking about the future drifted away from finance.

The problem is that university, at least in the UK, is inflexible. In the US, there’s a ‘major-minor’ system which lets you dabble and move between things. Although I still liked Economics as a subject, some of my personal motivation dissipated. I was too ignorant to realise that I couldn’t predict the future and that my interests were not set in stone. I don’t think my case is uncommon. Doing one subject intensively just leads you to want to do other stuff with the rest of your time, and when those hobbies become viable career prospects, drifting is only natural. With the UK system, if you have a change of heart, you can either drop out completely or start over with a new degree from year one.

University would better serve students if there were more choice and variety in what you could learn. Studying a single subject feels old-fashioned. I would also feel happier about the whole experience if first year didn’t exist. Calling degrees a ‘three year course’ is false advertising. First year is largely a repeat of content learnt at A Level, taught by lecturers who would much rather not be teaching remedial economics. Many people can pass year one without doing any revision (and, in many cases, without turning up to lectures) because it is so trivial. This is exemplified by the fact that first-year marks don’t count at all towards your final degree classification. It really feels like filler content and a waste of time.

Considering the amount of money students pay to attend, you would at least think that the university would get the exams right. Wrong. Over half of the twenty exams I took over the last three years had errors. Not just typos or small printing mistakes, but serious structural problems that often made questions impossible to answer. A classic example was a maths test that referred to a table of data which simply wasn’t printed on the paper.

Aside from errors, the entire uni exams situation is a mess. Assessment is supposedly standardised by university policy. In reality, modules come down to the whims of the lecturer. Marking is opaque and seemingly arbitrary. Some modules offer past papers with answers, some offer past papers without answers, and others offer no past papers at all.

An open secret is that lecturers can’t be bothered to write new questions every year, so they reuse (almost) the same papers. This laziness is probably why many don’t like releasing last year’s questions. As such, passing exams becomes more about gaming the system and learning answers by rote than actually understanding the subject. Much to my disappointment, remembering stuff is still the number one skill people need to succeed at school. If you want to actually learn stuff, you don’t need to go to university.

A Level and GCSE exams are regulated by central exam boards. University exams should have similar control. It’s hard to explain just how ridiculous some of the answers expected of students are. Rather than set thought-provoking, novel exam questions, I am convinced lecturers opt instead for obtuse mark schemes that organically filter students’ performance into nice statistical bell curves. As modules have no outside regulation, it is also impossible to compare degrees across universities. The same degree from two different universities is not equivalent; there is no way to truthfully compare their value. If I ever get into a position where I am hiring, I won’t filter on grade classification as so many companies do. It’s not a fair assessment of ability.

Exam regulation is just one part of overall teaching standards. Not all, not even the majority, but a good portion of the lecturers I had were just not good. There are bad and good teachers at school too, but the spectrum is far more exaggerated at university. Lecturers’ primary role is research, not lecturing. I don’t know what teaching training they have to pass to run a module, but it doesn’t work; some lecturers are just straight up awful at actually teaching anything. Lecturers offer office hours every week where you can go and talk to them about the material, but the best office hours are run by the best lecturers (naturally), so you are still kind of screwed for the modules with bad staff.

The worst bit is seminars. My seminars across the three years were run almost entirely by PhD students. If lecturers do get teaching training, it’s abundantly clear that PhD students get none. Seminars could be great (small-group lessons) if they were run by competent people. Instead, you do a problem set in preparation for the seminar and then the seminar tutor goes through the answers. The few seminars that were taught by lecturers were useful, as they could explain concepts and solution techniques. In general, I went to seminars because I had to (missing seminars leads to intervention), not because I found them that helpful.

Still, is university all bad? No. It’s okay. For one thing, if you want to start a decent career in Britain you need a degree. Whether I think university teaches anything of value is irrelevant. Beyond that, although teaching is patchy, there is benefit in an instructor telling you what to read and what to learn. Lecture notes greatly reduce the technical nature of source material so that it is actually understandable. After all, if university offered me nothing at all, I would have dropped out.

If governments regulated universities in the same way as compulsory education, standards would have to be far higher. Ultimately, a good way to sum up my feelings is that my favourite parts of uni have nothing to do with the actual institution. The social elements are great, and being around nice people every day is what keeps you going. I went in thinking the central teaching parts would be of higher quality; I leave underwhelmed on that front.

Please be aware that all of the above is based on the experience of one person, at one university, taking one degree. Maybe other places are different and I was unlucky. Maybe you get out what you put in, and I put more into other things, hence liking them more and liking uni less. Anecdotally, though, other people’s experiences sound very similar.

Apple Watch's Annoyingly Aggressive Screen Sleep Behaviour

20 May 2015

Zac Hall:

I do wish there was a method (or even setting) that would allow the watch face to stay active for longer periods of time. Even the stop watch goes to sleep after 20 or so seconds without being engaged. Sometimes it’s handy if not necessary to be able to stare at your watch and see the time pass. To do this with Apple Watch, you need to tap the display or rotate the crown every 10 seconds or so.

iOS apps are allowed to disable the ‘idle timer’ contextually, so that the screen does not Auto-Lock at an inconvenient time. Video apps do this, for example, so you don’t have to keep tapping the screen to stop the iPhone from dimming whilst watching a movie.
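
For reference, the iOS side of this is a one-liner. A minimal sketch, with a hypothetical player view controller standing in for any long-running, hands-off screen:

```swift
import UIKit

// Hypothetical example: keep the screen awake only while playback is on screen.
final class PlayerViewController: UIViewController {
    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        UIApplication.shared.isIdleTimerDisabled = true
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Restore the default so the rest of the system behaves normally.
        UIApplication.shared.isIdleTimerDisabled = false
    }
}
```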

Apple Watch apps need to have a similar concept. Obviously third-party apps are gimped in ways far beyond the screen turning off too early but this request applies to Apple’s apps too. Timer is the prime example.

Start a ten-minute timer. Check the timer. Wait a couple of minutes. Glance back at the timer. The Watch is back on the clock face. Get angry.

The Timer app should be able to tell the system that a critical activity is happening so the Watch knows not to sleep. On wrist raise, I should still be in the Timer app — not kicked back to the clock face. It’s a huge inconvenience for any important activity that lasts more than 17 seconds, which is how long the Apple Watch will wait for user interaction events before sleeping. Naturally, if a timer isn’t currently running then the Timer app doesn’t need to assert such privileges.

The Remote, Maps and Workout apps already work like this. They have elevated system privileges that take precedence over the clock face whilst in use. Maps and Workout are particularly good citizens, only overriding the clock face when navigation or a workout is in progress. Apple needs to make this behaviour universal across the OS, where contextually appropriate of course.
