According to a new note from reliable Apple analyst Ming-Chi Kuo at KGI Securities and corroborated by our sources, Apple plans to introduce a dramatically overhauled MacBook Pro later this year. Kuo says the device will have a new “thinner and lighter” design with design cues taken from the 12-inch MacBook, as well as Touch ID support and a new OLED display touch bar above the keyboard.
The OLED display touch bar will replace the physical function keys along the top of the keyboard, while the design will adopt new metal injection molded hinges as reported earlier this year.
It’s certainly a novel direction for the MacBook line, adding dynamism to a keyboard layout that has remained the same for many years. A lot of Windows laptops include a row of illuminated capacitive buttons, but Apple is going further: it’s essentially replacing the function keys with a (really skinny) OLED touchscreen that can display any arbitrary UI. I think Apple chose OLED for the contrast levels; I can envision how the deep blacks of the screen would look great alongside the piano-black keyboard keys.
It’s not obvious to me how Apple is going to use this secondary display. Because it isn’t the primary display, and because it can’t be a mandatory requirement for using OS X (Apple will still be selling millions of Macs without an OLED accessory bar), I fear it might be an underused gimmick.
As Nintendo fans will know from the Wii U, making interfaces that span multiple screens is tough. Both displays battle for the user’s attention, but ultimately one screen naturally monopolises the focus. In the case of the MacBook, the primary canvas is the 15-inch Retina display. Demanding that the laptop user look down constantly is laborious and annoying. The natural laziness of people means most do not want to be nodding dogs; there’s a reason why touch-typing is so popular. Aside from the physical strain, juggling multiple displays is simply a lot of information to take in. Creating UI conventions to signal when users need to check their dashboard display is incredibly hard, and putting critical information on the secondary display is a risk if the user simply forgets to check it.
The other end of the spectrum, then, is to keep the OLED screen content pretty much static. Limiting dynamism simplifies the mental load and enforces clear patterns of expectation about when the user is supposed to interact with the accessory display. Perhaps preferences allow for some customisation of what can appear there — the crucial point is that the buttons wouldn’t change passively whilst using OS X.
Although that would remove the problems I enumerated, it is a functionality tradeoff. What I’m describing in the second case is not that far removed from what exists already, i.e. a fixed set of function keys. In fact, it would be a regression in this case: the tactility of actual physical buttons would have been sacrificed. This is why I’m in a quandary. I would be concerned if Apple incorporated a significant new hardware change without a compelling use case to justify its existence.
A lot of people could argue that Force Touch was exactly that: a Mac hardware feature that was, and is, a dud. But this rumour has more potential to be destructive. Force Touch on OS X can simply be ignored with no downside. An OLED button bar that replaces the function keys cannot be ignored; it will have to be used by every new MacBook Pro owner. If it’s bad or mediocre, every customer will be impacted.
Google held its I/O conference this week, presenting its latest announcements and outlook on what can best be described as a pop-star concert stage. I think the venue was a mistake, but the presentation itself was markedly better than in previous years. Clocking in at two hours, the Google I/O keynote is finally down to an acceptable length. Just a couple of years ago, Google would run two three-hour presentations on consecutive days.
One thing Google unveiled was a FaceTime competitor called Duo. One element in particular struck a chord with me: when video-calling someone, the recipient sees a live stream of the caller’s video as the call rings. One side of the video call has already begun the moment the phone rings. The recipient can then pick up to start the two-way video, seamlessly transitioning into the conversation because the caller’s video is already live.
It’s a fantastic streamlining of FaceTime. Google also emphasised the instantaneous nature of the protocol, allowing the two participants to communicate immediately after the call is accepted. FaceTime’s usage model is a lot colder. One person asks to call someone else; the recipient sees the caller’s name and a static image. When the call is answered, the video streams attempt to initiate a connection, which involves staring at a Connecting indicator for a few seconds, before finally succeeding and letting the two people see and talk to each other.
The current FaceTime flow is as bad as a traditional phone call, which is basically what FaceTime is (in the same way iMessage is a 1:1 reproduction of SMS transmitted over the Internet). With Duo / Knock Knock, the call has effectively already begun as soon as the phone screen lights up on the receiving end.
Google showed how the caller could signify intent while waiting for the other person to respond. The user on the receiving end can pick up context from the Knock Knock video stream, such as where the caller is, what they are doing or who they are with. Google showed the potential with examples of people holding up movie tickets or engagement rings, or simply expressing happiness or sadness. (That being said, the product video — embedded above — did not do a good job of expressing the possibilities tastefully; it is too cheesy and forced.)
Aside from the speed and practical advantages, it’s also just damn cool to send your face to someone else. Even if the feature turns out to be a gimmick, it encourages more people to do video calls in general, if only through the novelty of how it works. I think it gives a meaningful benefit to picking the video option over audio, though. Even if the other person declines, you can imply something in those couple of seconds that never would have happened otherwise. It’s almost like a transient Snapchat selfie with the opportunity to commit to a full conversation.
It’s a user experience thing that I hope Apple adopts. There are obvious knee-jerk fears of the dangers of letting people put live video onto someone else’s screen without explicit consent. I think these issues are easily mitigated by decent policy design, such as a (default) preference to only ‘enable Knock Knock for people in my contacts’. Careful attention will have to be given to the interface for callers too, especially early on, to explain what is happening — make it plain that the other person can see what you are doing right now even though you can’t see them yet. These are solvable social and technological problems and the benefits are huge, in my view.
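A contacts-only default like the one suggested here amounts to a simple policy gate. Here is a minimal sketch; the function name, parameters and preference model are all invented for illustration, not anything Google has described:

```python
# Hypothetical sketch of a contacts-only live-preview policy.
# All names and the preference flag are invented for illustration.

def should_show_live_preview(caller_id: str,
                             recipient_contacts: set[str],
                             allow_unknown_callers: bool = False) -> bool:
    """Show the caller's live video before pickup only when policy allows."""
    if caller_id in recipient_contacts:
        return True  # known contact: preview enabled by default
    return allow_unknown_callers  # strangers: opt-in only

contacts = {"+15551234567", "+442071234567"}
print(should_show_live_preview("+15551234567", contacts))  # True
print(should_show_live_preview("+19998887777", contacts))  # False
```

The point is that the conservative default (no previews from unknown numbers) addresses the consent worry without removing the feature for the common case of calls between friends.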
Slight confession: I meant to write this post the same day as the event. I ended up being lazy and didn’t get to it until today. I’m glad I waited though, as it let me focus on what I was actually interested in. Almost subconsciously, my mind has concentrated on a couple of specific things.
Out of Google’s entire keynote, I can easily recall just two announcements: the Instant Apps demos and Knock Knock. Everything else is a vague blur or forgotten. Instant Apps is a technical quagmire with a lot of questions about implementation and utility, so I’m holding off on judgement until it’s more set in stone … although the premise is intriguing. Duo is more concrete, complete with product videos, and made me genuinely excited. Alas, neither of these announcements has a solid release date. I can’t wait to check out Duo and Knock Knock sometime “later”.
iTunes is already an amalgamation of many different things. With iTunes 12.4, Apple has reinforced its piecemeal design further by reintroducing parts of iTunes 11 without properly considering all the edge-cases and window states. 12.4 adds a sidebar that replaces a popover UI control to manage views like Albums, Artists and Genres.
The sidebar is a resurrection from the days of iTunes 11. I like the sidebar better than the transient popover (OS X has enough screen real estate to allow such affordances) but it hasn’t been thought through. It’s shoddy and incomplete. There are distinct sidebar sources for Albums and Compilations and yet selecting Albums still shows Compilations in the detail view when you scroll down. Some media types do not hide the sidebar but have no sidebar items to choose between (Podcasts). Many media views have no sidebar at all, leading to jarring transitions between tabs, including all of Apple Music.
Again harkening back to iTunes 11, the Media Picker is once again presented as a dropdown menu. In earlier versions of iTunes 12, the switcher for Music, Movies, TV Shows and so on was presented as a mini tab bar, with a More button to reveal the rest. I actually prefer the new dropdown for overall usability, as it features text labels alongside the glyphs. You can also edit the list to show just the items relevant to your library. However, it now takes two clicks to change views, which is a regression in efficiency. This is obviously frustrating if you context-switch a lot, but I don’t mind it — I rarely use iTunes for anything but music.
Most infuriatingly, iTunes has now made compilations second-class citizens in the library interface. There is now mandatory filtering that separates normal albums from compilations. To see this for yourself, click on Albums in the sidebar and scroll down. Previously, all of the albums and compilations would appear in one grid; with iTunes 12.4, they are sectioned independently. This is so frustrating because much of my library consists of compilations, which no longer participate in the normal album ordering.
As far as I can tell, there is no way to revert to the previous layout whilst maintaining a sort order by artist name. If you want to coalesce them and don’t mind sacrificing Artist ordering, change the View Options to sort by title.
In summary, iTunes continues to suck. It’s held back by its ageing codebase and the necessity of being a cross-platform program. A good version of the desktop app probably isn’t going to happen until Apple splits its constituent functions into separate apps, as on iOS with dedicated apps for Music, Movies and more. I look forward to the Music.app revamp in iOS 10 to see an unconstrained representation of Apple’s vision for music software.
Late last month, Apple brought seven leading podcast professionals to the company’s campus in Cupertino, Calif., to air their case to a room full of employees, according to two people who were there. The people would speak only on the condition of anonymity because they had signed nondisclosure agreements. The company made no promises, the people said, but several pressing issues for podcasters were discussed in frank terms.
After the presentations concluded, Eddy Cue, the executive at Apple who oversees software and services, arrived for a closed session with the company’s employees, according to the attendees.
I don’t care who the focus group was; Apple isn’t going to give out personal analytics and other sensitive data to third parties no matter the circumstances (see: ongoing frictions with the FBI and governments). Apple gives developers App Analytics reports about the usage of their apps; the information provided is anonymised and vague. Even then, iOS users can still opt out of supplying this information, which is barely useful and nowhere close to personally identifiable. Apple’s own advertising division, iAd, has been scaled back for similar reasons. Maybe Apple will start publishing data like ‘average playback position’ for episodes, or total number of plays (a more accurate metric than raw download counts). I don’t think there’s any need to worry about invasion of privacy.
Business implications are different. Podcasting has remained a surprisingly independent affair. Being realistic, Apple is now (reportedly) giving its podcast directory attention because podcasting is being commercialised. Phenomena like the success of Serial have certainly drawn big business into the fray. This would be my best guess as to why Apple has taken an interest after years of maintaining the status quo.
There’s a possibility Apple’s proactive involvement will be damaging. If I’m right about Apple’s motivation (influx of large corporations), then there’s a good chance independents will get shafted in whatever policies Apple implements. There’s also a chance that it’s a good thing. It’s not out of the question that Apple will add a storefront, so people can subscribe to shows for a monthly rate. Putting to one side the inevitable 30% cut, an easily-accessible subscription model Apple service could open up a new revenue stream for podcasts. More simply, Apple could also improve its podcast marketing and featured content efforts, potentially improving discoverability for good — but low listenership — shows.
You can complain about the App Store for an hour, but at the end of the day it was a great thing for a lot of people. It created good livings for many (and great livings for a few) who never would have had them without its existence. There are risks that Apple makes the podcast industry proprietary and closed, but it has the same right as anyone else to do what it wants. There will be winners and there will be losers.
I think it’s way too early to presume Apple’s involvement would be negative. Disruptive, sure, but not destructive. Again, consider the App Store. For all its flaws, you’d be hard-pressed to say it was a bad thing overall.
The new version of Apple Music, which Apple recently announced has 13 million users so far, focuses on a redesigned user-interface, a few new functions, and reorganization as well as simplification of existing features. The new user-interface ditches the current colorful and translucent look in favor of a simpler design that emphasizes black and white backgrounds and text. For instance, the user interface in the albums view will no longer change in appearance based on the color of a particular album’s art. While the new interface will eschew color in the user-interface, album artwork will become “huge” and a larger part of the interface in order to avoid a dull black and white look, according to people who have seen the updated Apple Music service.
The current Music app has a lot of faults and it’s not limited to Apple Music customers — as iTunes download users also must use the same app. I argued when Apple Music was new that Apple should have released it as a separate app, completely separate from the normal music experience, but there’s no indication of that happening.
With that idea put aside, I want to focus on one particular gripe I have with the current Music app: the selector widget that lets you switch between library views (Albums, Artists, etcetera). This is my primary method of navigation when looking at my music and choosing something to play.
And yet, with the iOS 8.4 / iOS 9 redesign, it was relegated to a secondary action. You have to press it once to bring up a modal, then press another button to do anything. It doesn’t stay permanently on the screen either; the dropdown button scrolls out of view, so it’s inaccessible unless you are looking at the top of the list. This means you have to scroll all the way back to the top, thereby losing your contextual place in the list, just to swap views.
All the previous iterations of iOS Music presented these context switches as tab buttons. The tab bar approach was far superior. Switching between Albums, Songs, Genres et al. was one tap. Moreover, the tap targets were in the most accessible location — at the bottom of the screen — allowing smooth one-handed operation. Tabs are also common design language in iOS (you already know how to use them), whereas the modal-action-sheet dropdown-switcher-thingy is an alien concoction that is not intuitive.
The currently selected view was clearly represented by the highlighted tab. When you scrolled the list, the tabs didn’t disappear either. Tabbing between views remembered the exact scroll position on each page independently. The tabs were even customisable and re-arrangeable, allowing some affordance for individual users’ unique needs.
To me, those are the primary actions. I switch between Album view and Artist view every single day. Yet, right now they are hidden away behind a modal menu behind a transient button (in the sense it disappears when you scroll). I have no good things to say about it, it’s an unequivocal regression compared to the previous tabbed design. I would love for this functionality to return to a permanent toolbar position. It would make the app so much better.
We’re developing a new solution that will allow videos to earn revenue while a Content ID claim is being disputed. Here’s how it will work: when both a creator and someone making a claim choose to monetize a video, we will continue to run ads on that video and hold the resulting revenue separately. Once the Content ID claim or dispute is resolved, we’ll pay out that revenue to the appropriate party.
A nice enhancement of the Content ID system in favour of content creators. Rather than monetisation just not happening for videos whilst in dispute, the ads stay but any revenue is kept in quarantine until the dispute is resolved on a particular video. When the ruling on video rights is decided, the money is distributed to the appropriate party. Although the payment is delayed, it’s way better than before when false claims would simply render channels unprofitable.
Of course, this means nothing if the resolution process is unfair in its judgement and the content creator forfeits the revenue even when their video was legitimate and valid. Hopefully, that doesn’t happen. I haven’t heard from many people who have had bad claims ruled against them, so I think it’s rare.
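The escrow model described in the announcement can be sketched roughly as follows. The class and method names here are invented for illustration and bear no relation to YouTube’s actual systems:

```python
# Illustrative sketch of the disputed-revenue escrow model: ads keep
# running during a Content ID dispute, revenue accrues in a held
# balance, and the full balance pays out to whoever wins the dispute.

class DisputeEscrow:
    def __init__(self):
        self.held = {}  # video_id -> ad revenue accrued during dispute

    def accrue(self, video_id: str, amount: float) -> None:
        """Record ad revenue earned while the claim is still open."""
        self.held[video_id] = self.held.get(video_id, 0.0) + amount

    def resolve(self, video_id: str, winner: str) -> tuple[str, float]:
        """Release the full held balance to the winning party."""
        payout = self.held.pop(video_id, 0.0)
        return winner, payout

escrow = DisputeEscrow()
escrow.accrue("video123", 40.0)
escrow.accrue("video123", 25.0)
print(escrow.resolve("video123", "creator"))  # ('creator', 65.0)
```

The key improvement over the old behaviour is visible in the flow: revenue is never simply forfeited during the dispute window, only delayed until resolution.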
Next I’d like to talk about services, which was our second largest revenue generating category during the quarter. Setting aside the amount we received from a patent settlement in the December quarter, the March quarter services revenue was our highest ever.
Services revenue jumped 20% to 6 billion dollars. App Store revenue was up 35% to beat last quarter’s all-time record, and Apple Music continues to grow in popularity with over 13 million paying customers today.
We feel really great about the early success of Apple’s first subscription business and our music revenue has now hit an inflection point after many quarters of decline.
Apple is pushing the services category as a burgeoning part of its business with current and future growth potential. Focusing on and expanding services is a fascinating proposition as generally I’ve considered Apple as a company that sells hardware and bundles services for free. To me, these comments on the earnings call indicate Cook wants to develop services further in a serious way.
If Cook is being sincere, and not merely paying lip service in the middle of a hardware revenue slump, then it has huge implications for product direction. I’m wary that they might tread off the golden path, especially as internet services aren’t exactly something the company has shown itself to be comfortable executing, but there are potential positive repercussions as well.
There’s a chance Apple dabbles in low-margin products as a result, for example. Selling customers high margin hardware and expecting associated high margin services purchases as well is far less compelling than a Kindle-esque strategy with cheaper hardware dependent upon ecosystem purchases. Arguably, Apple TV is destined to be exactly that. A cheap box with an assumed reliance on Apple subscription services.
On the negative side, I do think this means free iCloud storage will continue to be crippled for the foreseeable future, with Apple encouraging people onto paid tiers. There’s a chance they bump the free quota slightly (currently 5 GB), but more likely they will boost upsell opportunities for the paid plans. For example, I would not be surprised if Apple soon doubled the $0.99-per-month tier to a 100 GB limit, up from 50 GB today. This is similar to how Apple pushed hardware ASPs higher by keeping the 16 GB model around and instead bumping the mid-tier to 64 GB.
Cook referred to Apple Music as Apple’s ‘first’ subscription service. You don’t need to be a seer to expect that more are coming. The prime candidate is something I dubiously dub ‘Apple Video’, the long-rumoured skinny-bundle cable streaming service. Although Apple Video makes most sense in the context of Apple TV, it will certainly be available on every iOS device and Mac, maybe even Android. A video service can also be priced higher than music streaming (I would guess around the $30 price point), which is good news on the revenue growth front.
In the earnings call, it was noted that a big part of Services revenue growth is being driven by the App Store. This is entering dangerous territory for me, where Apple’s motivations are warped too far towards money rather than doing what is best for its developer community, in which I participate. Rumours of paid search are suddenly far more difficult to dismiss.
There are ways that Apple could boost revenue from the App Store that also simultaneously benefit developers and customers. If Apple can increase App Store monetisation for developers, it will see higher returns through the 30% cut. Specifically, Apple would need to increase income generation on monetisation platforms that it controls, like In-App Purchase or the initial upfront price of paid apps.
Avenues like ads can make developers rich, but Apple gets nothing; it’s money that channels through Apple’s platform without Apple taking a slice. Apple is backing out of the iAd business completely. Most of the richest developers on the App Store today make a large proportion of their income via advertising. Apple gets nothing.
If they could foster alternative monetisation strategies that go through their first-party payment systems and make developers switch away from advertising, Apple would be making money where they were previously making nothing. Successfully executing this would reduce the number of ads in apps whilst making Apple more money. That’s a win for customers, developers and Apple. Easy to say, way harder to actually find such monetisation avenues and do it.
Looking at the harsh realities, though, it is extremely difficult to see how any of this stuff adds meaningful revenue to Apple’s balance sheet. If the company wants to empower its future growth through its services businesses, it needs to offset billions of dollars of declining hardware revenue. Writing off billion dollar businesses as small sounds so flippant but for Apple it is true.
Apple Music has 13 million customers paying $10 a month right now. That’s roughly $1.5 billion a year. Apple’s total yearly revenue for 2015 was approximately $230 billion. Apple Music is teensy-tiny in revenue terms. (No idea on profit; I’d guess it has around 30% margins.) By the way, there is also a cannibalisation factor to account for here: people buying Apple Music will (logically) cut spending on iTunes downloads.
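The back-of-the-envelope arithmetic works out like this, using the subscriber count and fee from the quotes above (the total-revenue figure is approximate):

```python
# Rough scale check: Apple Music revenue vs Apple's total revenue.
subscribers = 13_000_000
monthly_fee = 10  # USD
annual_music_revenue = subscribers * monthly_fee * 12
total_2015_revenue = 230e9  # approximate, USD

print(annual_music_revenue)                       # 1560000000, ~$1.5B/year
print(annual_music_revenue / total_2015_revenue)  # ~0.0068, under 1% of total
```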
Perhaps the way Apple drives non-negligible revenue growth is through a combination of many different things. For the last five years, Apple has been the iPhone. In the future, maybe Apple is more of an ensemble affair, with many different streams contributing to its overall numbers. There won’t be a single service that rivals its hardware income, but a grouping of App Store income, Apple Music, (rumoured) Apple Video, iCloud tiers and whatever else could add up.
There’s nothing wrong with diversification per se but it is a change to how the company used to operate. Around the launch of iPad mini, there’s an obvious breakpoint in company strategy where they expanded from a couple of flagships to a myriad of variants in each category. The days of Apple’s products ‘all fitting on one table’ are long gone. If services do grow significantly, the metaphor really breaks down as its products would be intangibles.
For the past few months, I’ve been collaborating with Sam Beckett (author of a fantastic Control Center concept we linked to a while back) to visualize my iOS 10 wishes with a professional concept video and static mockups. Sam and I discussed my ideas for a couple of months, and he was able to visualize what I would like to have in iOS 10 – both for the iPhone and iPad – with a style and attention to detail I’m extremely happy with.
Below, you’ll find a collection of my iOS 10 wishes, organized in tentpole features (the ones also shown in the video) plus additional sub-sections. Some of these wishes have been on my list for years; others are a consequence of the features Apple shipped with iOS 9.
You should read the list and watch the video before going any further. I could post this with a comment of agreement and say everything Viticci suggests in the MacStories concept video is useful and Apple should add it all to iOS. That’s a boring (and obvious) thing to do so I’ll spare the words. As iOS is an endless cycle of feature releases, ultimately almost everything in the article will probably come to light eventually.
This is one of the best iOS feature concepts I’ve ever watched. It offers realistic ideas about how iOS could improve, with interface designs that are nice to look at and fit well into the existing metaphors of the system.
The Control Center customisability is great, utilising the same jiggle indicator as the Home Screen to show mutability. Expanding the Messages app to handle more rich media types is an obvious future direction and the video does the idea justice with some cute UI work. Changes to the iCloud Drive app and Document Picker are well-warranted and the proposed layout is a great balance of Finder-esque power with overall iOS simplicity.
The side-by-side multitasking app switcher redesign is also nicely considered, with a higher priority given to recent apps and an affordance for user-defined stickied favourite apps. I don’t like how they have chosen to activate drag-and-drop mode (by exposing a drag handle alongside the Cut-Copy-Paste menu), but I don’t have a better answer to hand, so it’s hard to genuinely critique it.
I love the subtle bounce animation the video uses for popovers, which isn’t even mentioned explicitly; a nice quick effect to draw attention to the modal view. This is what iOS 6 had and what iOS 7 and later need: bits of delightful whimsy that don’t get in the way of what you were actually trying to accomplish.
All that being said, the realities of making a video mockup versus actually creating the feature as an Apple engineer are different things. When you are making a video, each feature is about the same amount of work: think up an idea, make some assets and glyphs, incorporate that into a series of moving images.
I’m not claiming it’s easy to do (I couldn’t make these mockups); I’m saying each item can be treated as equal in priority and importance. Implementing this stuff in a working, shipping version of iOS is very different. I’m certain a lot of it, as is, would have usability issues when actually made; there are a lot of edge-case issues that pop up in development that don’t come through in static screenshots and concept videos.
Different features have wildly different requirements. Making rich Message previews for URLs and Notes is probably easier than changing the Apple Music machine-learning algorithms to be more contextually relevant. Similarly, making Message previews for links is easier than building a framework that lets all third parties integrate into message bubbles and show custom content and buttons.
Dark Theme is a great example of a feature that is easy to visualise in a couple of Photoshopped screenshots (MacStories’ video depicts a dark version of Messages, Calendar and Music) but actually doing it well at an OS level involves many more challenges than simply turning a white background black. There needs to be a lot of planning and thought for how the settings work, whether there are automatic options for sunrise/sunset or brightness, Control Center overrides, handling timezone changes, etcetera etcetera. Simply adding ‘night’ themes to every system app would be a huge undertaking for Apple’s design and engineering departments.
The varying workload required is what ultimately determines what Apple tackles, and when, from the smorgasbord of potential features. Looking realistically at the major things on the list, my guess is iOS 10 will probably include a more flexible Control Center, a better multitasking app switcher, read receipts per Messages conversation and, as an outside bet, drag-and-drop between side-by-side apps. Everything else is probably out of scope for this year.
This isn’t a criticism of Viticci’s work (he’s not intimating this is simple stuff), but many people watch these videos and believe as much, with a sentiment like ‘this guy on YouTube did it, why doesn’t Apple?’. The same applies to written feature-request posts, of course, but there’s something about the visceral quality of video that reinforces that feeling more than words on a page.
Apple today updated MacBook with the latest Intel processors, improved graphics performance, faster flash storage and an additional hour of battery life, making the thinnest and lightest Mac better than ever. Featuring an all-metal unibody enclosure, MacBook is now available in four aluminum finishes — gold, silver, space gray, and for the first time on a Mac, a gorgeous rose gold. With a stunning 12-inch Retina display, highly responsive full-size keyboard, Force Touch trackpad, versatile USB-C port and all-day battery life in a design that is just 2 pounds and 13.1 mm thin, MacBook is the future of the notebook.
Unsurprisingly, given that the Retina MacBook design was brand new just a year ago, Apple has kicked off its 2016 Mac updates with a minor refresh to its flagship laptop. Better CPU, better graphics, longer battery life, Rose Gold. Somewhat disappointingly, the USB-C port is not Thunderbolt 3.
The pricing didn’t change, so it still starts at $1299 for the base model. This means the MacBook Air continues to be sold for another generation. Once the MacBook can get down to three-digit prices, I don’t see a reason for the Air to exist. The Air line is running out of breath (excuse the pun) and I think the upcoming ultra-thin 13-inch laptops will be branded as Pro, not Air. (By the way, the new Pro models will probably replace all their ports with a few Thunderbolt 3 / USB-C sockets.)
Interestingly, the 13-inch Air did get a spec bump today on RAM (8 GB across the board), so it isn’t abandoned. I addressed this flippantly on Twitter, but that disguised my actual point. What I was trying to convey is that the RAM improvement should not be seen as a sign that the Air branding is alive; more that it’s on life support, receiving maintenance updates to sustain it. I do not believe the Air laptops will still be sold in a year or two.
Scammers being scammers, they realized Amazon was lying very early on. Amazon couldn’t tell what pages were read. They only knew the last place you were at in the book. And that’s what they were paying authors, the last place that the reader synced in the book.
So, a KU borrow on a device that didn’t sync until after the book was read and the reader flipped back to the front to check out what else you’d written? Yeah, no pages read.
But likewise, a reader who clicked a link on Page 1 offering them the opportunity to win a Kindle Fire HDX 8.9 and a $100 Amazon Gift Card….which then sent them to the back of a 3000 page book? Yep, you guessed it. They got paid for 3000 unread pages. (And no, there was no winner for those contests that anyone knows of.)
Anything you make open to the public is open to abuse. The scam itself is self-explanatory and straightforward, but closing the loopholes is a lot harder than it sounds. It’s impractical for Amazon to proofread every single page of every single book that gets submitted to the Kindle platform. What’s particularly frustrating for legitimate publishers in this case is that the payout pool is fixed, proportionate to the number of Kindle Unlimited subscribers. This means the bad actors are directly stealing revenue from the pot that would otherwise have been distributed to real writers. If fewer people were ‘reading’ 3000 pages of computer-generated rubbish, all authors would make more money.
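To make the dilution concrete, here’s a minimal sketch of the fixed-pool maths. The fund size and page counts are entirely hypothetical numbers for illustration; only the mechanism — everyone paid from one fixed pot, per page counted — matches how Kindle Unlimited works.

```python
def per_page_rate(fund, total_pages_counted):
    """Fixed-pool payout: the per-page rate is the monthly fund
    divided by every page counted that month, real or gamed."""
    return fund / total_pages_counted

# Hypothetical figures, for illustration only.
fund = 10_000_000            # fixed monthly pool, in dollars
honest_pages = 100_000_000   # pages genuinely read
scam_pages = 25_000_000      # pages 'read' by jumping to the back of padded books

rate_clean = per_page_rate(fund, honest_pages)                # $0.10 per page
rate_diluted = per_page_rate(fund, honest_pages + scam_pages) # $0.08 per page
```

Every fake page counted lowers the rate paid for every real page, which is exactly why the scam takes money straight out of legitimate authors’ pockets.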
Following Apple Music, I’ve heard some people say Apple should offer subscription services for video and for apps, paying developers based on usage metrics. For the former, I fully expect that to happen: an Apple streaming video service is inevitable.
For the latter, I am doubtful we will ever see something like Kindle Unlimited for apps run by Apple. It would be a massive attack vector for people looking to make a quick buck by abusing the platform. There’s plenty of scammy stuff on the App Store today, even without developers having monetary incentives to keep users tapping around in their apps to clock up usage quotas. With a few seconds’ thought, I can imagine bad developers adding arbitrary wait times or additional steps to their UI, just to grab a few extra cents from each customer.
The reason Apple Music doesn’t have this problem is that the music submitted to the service is heavily vetted; getting content into the library is pretty arduous, requiring songwriters to publish through a record label of some kind. The same would be true of the hypothetical forthcoming Apple Video service. Nothing’s perfect, of course — Spotify was famously abused by a band that asked fans to listen to hours of silent audio tracks on repeat to earn pay-per-play revenue.
Apple Inc. has constructed a secret team to explore changes to the App Store, including a new strategy for charging developers to have their apps more prominently displayed, according to people familiar with the plans.
Among the ideas being pursued, Apple is considering paid search, a Google-like model in which companies would pay to have their app shown at the top of search results based on what a customer is seeking. For instance, a game developer could pay to have its program shown when somebody looks for “football game,” “word puzzle” or “blackjack.”
Adding paid search to the App Store could be extremely damaging to the (already fragile) ecosystem. If you asked people to name the main problems with the App Store today, discoverability of good apps would rank near the top of the list.
I think any change to the App Store search algorithm that adds variables which can be distorted by third-party marketing budgets is undesirable. By its very nature, paid search skews results towards the institutions with the biggest wallets. It does not seem like it would improve discoverability for me, an indie developer who cannot afford these expenses.
Paid search has a spiralling effect too, reinforcing the position of the incumbents. The theory goes that the apps already at the top of the App Store grossing charts (think crappy virtual-currency games) will have the most money to spend on paid App Store advertising, strengthening their dominance further. The small guys selling apps for a few dollars apiece don’t stand a chance — there’s no money left over for more marketing.
Google is considered the best search engine in the world and its results are littered with paid ad placements, so commercialising search does not spell doom by default. I will wait to see Apple’s implementation before getting seriously upset. That being said, any store algorithm that ranks apps more highly for paying the most money, rather than for qualities of the app itself, does not excite me as a user or as a developer.
Perhaps, if paid search advertising budgets were somehow linked to the price of the app, it could be feasible. If Apple guaranteed to only charge developers if the advertising resulted in a sale, small apps could at least join in. Developers could simply apportion a fixed percentage of their app revenue towards the App Store marketing budget.
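As a rough sketch of how a revenue-linked budget might work — to be clear, the function, percentages, and the 30% cut split below are my own hypothetical assumptions, not anything Apple has proposed:

```python
def max_cost_per_sale(app_price, marketing_pct=0.10, apple_cut=0.30):
    """Hypothetical bid ceiling: a fixed slice of the developer's
    net revenue from each sale the ad actually produces."""
    net_revenue = app_price * (1 - apple_cut)  # developer's take after Apple's cut
    return net_revenue * marketing_pct

# A $4.99 app nets the developer ~$3.49 per sale;
# a 10% marketing slice caps the bid at ~$0.35 per converted sale.
bid = max_cost_per_sale(4.99)
```

Because the charge only ever happens on a converted sale and scales with the app’s own price, a $0.99 app and a $49.99 app would each be risking the same fixed fraction of revenue, which is what would let small developers participate at all.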
However, I don’t think much would change for the better if something like that was introduced. A new equilibrium would quickly arise where everyone would be forced to pay for Apple’s paid search listings to stay competitive with the wider market of apps, leaving the problem of ranking quality apps higher than the rest unchanged.
The same developers would be competing for the same eyeballs as before, except they would be paying a bit more for the privilege, with Apple’s revenues receiving a tidy boost from developers compelled to pay for premium placement in App Store search.
Let’s assume for now that we have system integrity (we don’t). How do we implement DRM? The truth is that DRM is impossible in theory. However, as engineers, we do not always have to follow theory. The secret of DRM is that, unlike other cryptosystems, you are not designing it to be secure forever. (Note for the pedantic: I know that no cryptosystem currently known would last forever, but if you can point out this fact, you also know what I mean.) Specifically, if your DRM can last 100 years, most people (on the engineering side) would be very happy. In fact, if you can provably do that, you would “solve” the problem of DRM. Most DRM schemes are designed with decades in mind (something that you might not admit to business people). That means we can commit some security faux pas that the textbooks would forbid. For example, security by obscurity is a tool here. If it takes the hacker 5 years to figure out your scheme, then by all means do it, because you just bought another 5 years. (But be warned that if you think it takes 5 years to crack the scheme, it likely will take 5 months.)
Standalone, this post is an interesting exploration of the 3DS’s security measures and the quest to prevent game piracy. As no provably foolproof method of content protection has been found, all DRM is about delaying people for as long as possible. At the end of the day, it’s just maths. This article is particularly relevant to my interests in light of recent happenings in technology; hopefully, you can see the parallels between what Nintendo does and what Apple does with iOS devices.
When embroiled in debates over human liberties and public safety, there’s a tendency to pigeonhole the meaning of encryption as only the thing that protects personal data on your devices. In reality, of course, encryption and hashing techniques are everywhere in technology. Even assuming such a policy were enforceable, a ban on encryption is not feasible. There are numerous legitimate reasons why a company wants to include encryption in a product.
Game DRM is just one case; Nintendo and others use cryptography to (attempt to) protect their games libraries from piracy. In the same way that the FBI is looking for compromises to gain access to the contents of an iPhone, hardware hackers meticulously reverse-engineer the workings of Nintendo’s consoles to break the software security policies. When successful hacks are found, Nintendo releases newer hardware with different security protocols and the hackers get to work again. This is exactly what Apple does with its iPhones.
Researchers at Washington University in St. Louis in 2012 found that laptop note-takers tested immediately after a class could recall more of a lecture and performed slightly better than their pen-pushing classmates when tested on facts presented in class. They reported their experiments with 80 students in the Journal of Educational Psychology.
Any advantage, though, is temporary. After just 24 hours, the computer note takers typically forgot material they’ve transcribed, several studies said. Nor were their copious notes much help in refreshing their memory because they were so superficial.
I wouldn’t put too much faith in a study of just 80 students, but it’s interesting enough to warrant a discussion nonetheless. The result mirrors my experience, at least. I memorised many pages of essays in a matter of hours, less than a day of total work, for university exams. I did this by writing my notes out, over and over again. It’s boring and repetitive, but it definitely meant I could remember them. When it came to the actual exam, I could write out vast passages of things I had learnt by rote — word for word if I had to. A few of my friends took this exact same tactic and it seemed to work for them as well.
This study covers learning directly in lectures. I did use my laptop in several classes to make lecture notes, and indeed I found that straight afterwards I would forget what we had covered. It wasn’t a matter of missing stuff; my typing proficiency meant I captured everything. However, the advantages and convenience of typing far outweighed the advantage of immediate recall. I settled on a strategy where I would use my laptop to take notes in lectures, but when it came to test revision, I would use pen and paper.
I think what happens is that your brain spends so much energy acting as a scribe that you enter a mode where words come in one ear and out the other. It is robotic output — you aren’t really thinking about what the words mean. When you write by hand, you are physically moving your hands more and have to look at what’s being written. Abbreviating or paraphrasing passages also plays a role, as you have to actively think about the concepts at hand rather than simply copying sentences.
It would be interesting to see a study of note-taking by people who aren’t fast typists, but instead have to look at the keys and tap out each word. I would guess this more closely replicates the mental experience of writing.
Personally, I found writing out complete paragraphs into notebooks far more effective than making actual notes. Writing full sentences helped me remember material better than shortened phrases or bulleted lists did. It was slower to do (and quickly makes your hand ache) but it worked.
Being able to remember stuff at university is a hugely important skill, but it isn’t everything, of course. I found that as long as I had understood something once, it was far easier to write it out later and get back up to speed. The ability to quickly conjure up introductions and conclusions also helps a lot, even if you don’t really believe what you are writing. I did well by basing my arguments around whatever subset of information I could recall on the day, rather than walking in with a fixed opinion and flailing around in my head for evidence.
In many examinations, what I had memorised was very reusable, as professors often set similar questions year to year. This helps with time constraints too, as it’s far quicker to scribble something word-for-word than to construct arguments and explanations on the fly. In other instances, I had to engage the mind more to shape an essay that fit the question set, but the writing still let me recall the facts.
Now for the super obvious part, with some thoughts to follow: a smaller display means iOS must be a lot more compact. That part we all could have guessed, but I’ve also concluded that the medium-sized 4.7-inch iPhone is the design target for iOS, while the 4-inch and 5.5-inch iPhones adjust in either direction — stretching out more than the default, or squeezing in more than the default. That being said, I do believe the 4.7-inch iPhone is the go-to model for most people, even before you consider screen-size preferences, price, and other specs.
It’s very trendy in the Apple blogger community to say that the 4-inch size makes a better phone, justified by its physical advantages: the iPhone 5 chassis fits more comfortably in pretty much everyone’s hands. No doubt, the phone is easier to hold. The iPhone 5 advert about thumb reach, which Apple conveniently forgets now that its flagship devices are much bigger, is perfectly true.
However, the same people that argue for the smaller screen because of its physical usability disregard the important points of software usability, where the iOS user interface feels constrained and small. As Hall notes, there’s a strong feeling in how iOS is engineered that the target size for iOS design is the 4.7 inch display … with secondary adjustments for larger and smaller afterwards.
This is true of system components (SpringBoard, Control Center, tab bars) and the third-party app ecosystem alike. Living day-to-day with a Twitter client where two tweets barely fit in the viewport is rough, especially when you account for the fact that many social media posts feature tall attachments like photos and embedded previews. There’s a natural tradeoff at play that will exist for many years, until technology develops things like stretchable, flexible screens. If my only choices were a 4-inch phone or a 5.5-inch phone, I would pick the big phone every time because of the larger canvas.
Thankfully, that choice of extremes is only theoretical: Apple sells a 4.7-inch iPhone. For me, this is the obvious choice between the 6 and 6 Plus options, and it continues to be the best usability compromise in light of the iPhone SE. The additional diagonal yields almost 40% more screen area, which makes a huge difference when using apps and viewing media. Most people I know can also use the 4.7-inch chassis one-handed. I can reach all four screen corners with my thumb comfortably, lifting the phone gently to hit the top-right region. This is incredibly natural and beats the Plus phones all day long. Minor hand gymnastics beats two-handed operation.
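For the curious, the screen-area claim follows from simple geometry. A quick sketch, assuming both phones use the same 16:9 aspect ratio:

```python
def screen_area(diagonal, aspect=(16, 9)):
    """Physical area of a rectangular display, given its diagonal.

    For aspect ratio a:b, width = d*a/sqrt(a^2 + b^2) and
    height = d*b/sqrt(a^2 + b^2), so area = d^2 * a*b / (a^2 + b^2).
    """
    a, b = aspect
    return diagonal ** 2 * a * b / (a ** 2 + b ** 2)

# 4.0" (iPhone 5/SE class) vs 4.7" (iPhone 6 class)
gain = screen_area(4.7) / screen_area(4.0) - 1
print(f"{gain:.0%}")  # → 38%, i.e. close to the 40% figure
```

Note the aspect ratio cancels out of the ratio, so the gain is just (4.7/4.0)² − 1 ≈ 38% regardless of the exact shape, as long as both screens share it.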
Safari Technology Preview is a standalone application that can be used side-by-side with Safari or other web browsers, making it easy to compare behaviors between them. Besides having the latest web features and bug fixes from WebKit, Safari Technology Preview includes the latest improvements to Web Inspector, which you can use to develop and debug your websites. Updates for Safari Technology Preview will be available every two weeks through the Updates pane of the Mac App Store.
Safari Technology Preview takes a snapshot of the WebKit nightlies, roughly once a fortnight, and packages it into a working web browser. It is not going to be as stable as a formal public release, but the idea is that it is stable enough that web developers could use it as their primary browser.
By comparison, the WebKit nightlies are often very buggy, and using one as your daily browser is simply infeasible given the high chance of crashes. Safari Technology Preview also packages in iCloud support and automatic updates through the Mac App Store (even though you initially download the app from a website).
It’s a convenient way to stay closer to the bleeding edge of web technologies. If this sounds familiar, it’s almost identical to the strategy Google uses for Chrome with its Canary builds. Safari Technology Preview (with its annoyingly wordy name) also comes with a slick purple sundial as its dock icon. So there’s that.
Sony Corp. is planning to sell a more powerful version of its PlayStation 4 machine to handle higher-end game experiences, including virtual reality, people familiar with the matter said, while continuing production of its existing console that has so far sold more than 36 million units world-wide.
Sounds like an iPhone release cycle, an ‘S’ model. It’s not much of a surprise if you follow the industry: Microsoft has also announced plans to treat the Xbox more like a personal computer.
This would mean incremental upgrades to the hardware (internal components) periodically. This isn’t completely unprecedented — both Microsoft and Sony have released stop-gap ‘new’ hardware to spike sales in the console cycle, through ‘Slim’ redesigns. Those were more aesthetic upgrades than anything; the body became thinner and sleeker but the internal components remained the same as the 1.0 hardware.
The Wall Street Journal report says the upgraded PlayStation 4 will have the same games catalogue as the current PlayStation 4, but it isn’t clear how the games would benefit from the additional GPU and CPU enhancements. Any change in internals will cause some friction for developers, who will have to dedicate additional engineering resources to optimise for the better components. The report did indicate that the new strategy is partly to account for the upcoming virtual reality headsets, which by their nature demand more compute power.
I also think that the move to minor, iterative hardware updates is a counter to the world of smart devices. The graphics chips inside the iPhone, iPad and Apple TV (arguably the closest competitor to traditional games consoles) are catching up quickly. If console makers continued their old-style plans of elongated seven-year cycles, they would likely be surpassed by smartphones, tablets and set-top boxes before the next generation of flagship consoles came around.