Sony Buys Gaikai: A Solid Investment In Future Services

This week Sony, or more specifically Sony Computer Entertainment, bought Gaikai — the streaming game service. Rumours of a tie-up had been circulating prior to E3, and Gaikai had made no secret that it was on the market for around $500 million. The $380 million Sony paid is well under that, but even so it must have been a difficult decision given the Sony group’s current performance.

What does the purchase mean for Sony and the wider gaming market?

  • Sony is buying networking and service platform expertise . . .  Sony has struggled long and hard with online services and software: its PlayStation Network is now robust but suffered an embarrassing hack attack last year, while its PC and phone software (Media Go, PlayStation certification for phones) seems to lag a generation behind folks like Apple or even Microsoft. Gaikai’s core networking and service delivery expertise could fix many of these issues in a relatively short time (months rather than years).
  • . . . as well as console backward compatibility. Despite consistently offering by far the best access to and support for older titles among today’s three platforms, Sony has long been on the receiving end of gamer complaints about the removal of backward compatibility as it has released new hardware iterations of the PS3. Streaming could restore backward compatibility for today’s PS3 and, more intriguingly, allow a future PS4 to run today’s PS3 games without additional hardware.
  • Non-console devices can join the game. While not explicitly stated as an aim for Sony Computer Entertainment, its rich gaming back catalogue, along with Sony’s engineering expertise in PCs, TVs, tablets, and phones, means that PlayStation games could now come to all of these platforms. This would provide a USP (if kept exclusive to Sony hardware) and an additional revenue stream from existing games for little additional investment.
  • Where does this leave Microsoft? Microsoft is already working with OnLive, the rival (and arguably better-known) game streaming service, although the relationship has been rocky at times. Does Sony’s news justify Microsoft engaging more here — or even considering an acquisition? Probably not, provided Microsoft (along with investors like HTC) can get ready access to the technology as a ‘partner’; Microsoft is already much more competent at online execution in gaming.
  • Connectivity will need to take the strain. One thing is for sure: users will need solid, fast, low-lag broadband connections (and in-home wiring/wireless) to make any of these streaming services work consistently. Netflix and Hulu sometimes struggle with one-way traffic when streaming video into the home; gaming services need to do this as well as upload user actions and act on them at the server end — the rough latency arithmetic after this list shows how tight the budget gets. Let’s also not forget that consumers surrender some of their control with these services — starkly illustrated by the storms last week that took chunks out of the Amazon cloud. This is slightly inconvenient if you want to post your latest wedding dress photo to Pinterest; it’s disastrous if you are 3 to 4 hours into a streamed gaming session without a local save!
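
To put some (rough) numbers on that, here’s a minimal latency-budget sketch in Python. Every stage figure is an illustrative assumption rather than a measurement from Gaikai or OnLive, but it shows why two-way game streaming is so much less forgiving than one-way video:

```python
# Back-of-envelope latency budget for a streamed game at 60 fps.
# All stage timings are illustrative assumptions, not measured figures.

stages_ms = {
    "controller input upload (client -> server)": 15,
    "server game logic + render (~one 60 fps frame)": 16,
    "video encode": 5,
    "video download (server -> client)": 15,
    "client decode + display": 10,
}

total_ms = sum(stages_ms.values())
for stage, ms in stages_ms.items():
    print(f"{stage:>48}: {ms:3d} ms")
print(f"{'end-to-end lag':>48}: {total_ms:3d} ms")

# Action games are often said to feel sluggish beyond ~100 ms of total
# input-to-display lag, so the two network legs above leave very little
# headroom for a congested home Wi-Fi link or a distant server.
BUDGET_MS = 100
print(f"Headroom against a {BUDGET_MS} ms tolerance: {BUDGET_MS - total_ms} ms")
```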

Microsoft Surface Tablets: A Core Strategic Change For Microsoft

Yesterday, in a much-hyped announcement, Microsoft unveiled its future tablet offerings — both an ARM-based and an Intel-based 10.6-inch tablet running their respective flavours of Windows 8.

While no pricing details have been announced, these aren’t budget devices: the ARM-based device is likely to be priced on par with premium Android tablets ($500) and the iPad ($650), while the Intel-based tablet will approach Ultrabook prices ($700 to $1,000). I’ve written previously about how the tablet war wouldn’t really kick off until Microsoft arrived, so what can we draw from these initial announcements?

  • The Intel tablet will clearly be targeted at business, at least initially. Microsoft has been running interference over the past two years around businesses adopting iPads as core employee devices. When the Intel-based Surface ships (probably early 2013), Microsoft will at last have a proper solution for its big enterprise customers — and one that it can supply directly rather than relying on the vagaries of OEM support. Given a reasonable price point, proven compatibility with legacy Windows applications, and robust security and remote management capabilities, I would envisage high levels of interest in the product.
  • Ouch! Talk about kicking OEMs when they are down. Dell, HP, and Acer are all reporting poor financials, mainly thanks to the lacklustre PC market (Lenovo is an exception here, doing rather well, thank you very much). Imagine you are in their position; suddenly, the biggest software supplier you work with has decided to build hardware — and not just any hardware, but the new premium form factor you were planning to use to relaunch your business. Sure, this might “prime the pump” for Windows 8 tablets from other OEMs (as Ballmer hopes) or it could be like partnering with Nokia on phones — instantly alienating other manufacturers like HTC, Samsung, etc.
  • The Windows RT Surface — hmm. Windows RT on ARM seemed like a great idea when announced last year, but in the subsequent months, Intel has pulled a rabbit out of its hat and got x86 architectures performing almost as well as ARM while not being power hogs. So, you’ll now have the choice of a premium ARM tablet running Windows RT (admittedly with free MS Office) but doing little else that people would recognize as Windows — or an x86 tablet running “proper” Windows 8 with full (or nearly full) backward compatibility for more or less the same price. This is not a difficult choice. Admittedly, given that Microsoft is targeting businesses initially with the x86 tablet, its version will be more expensive, but expect one of the OEMs to have a cost-comparable 10-inch tablet running full Windows 8 at or just after launch.
  • It’s time for Android to step up. Android tablets have represented the only really viable alternative to the iPad to date, and yet most have failed to make a mark with buyers. We’re finally getting some good devices (like the ASUS Transformer and Samsung Galaxy Tab), and the Google Nexus tablet is — allegedly — just around the corner. If manufacturers (and Google, of course) want to stay competitive, they need to up their game and produce more stable, aggressively priced devices that can either undercut the Windows/iOS devices (like the Kindle Fire does) or offer something better.
  • Are apps the be-all and end-all? Much discussion is already centring on whether the Surface tablets will have a sufficiently developed apps marketplace to thrive. Certainly, the iPad has been driven by the legacy success of the iPhone apps marketplace; certain categories of applications, such as games, social media clients, and photo manipulation, figure highly in what people use their tablets for. Given that this is effectively a new platform, the ARM Surface will need apps to survive, but the x86 Surface may be able to flourish (at least initially) without them. Why? Windows 8 (on x86) will be the first OS designed for a tablet with backward compatibility (and no — backward compatibility with a phone doesn’t count); on day one, it will already have access to more apps than all the other platforms (although, admittedly, many of these won’t work well with the Metro UI out of the gate).

Overall, while we’re still awaiting vital details, the Surface announcements do at least show that Microsoft is prepared to make a major strategic shift into hardware to protect its position. I have high hopes for the x86 Surface (and the competing OEM devices it might spur), but I see the ARM Surface as falling between multiple stools — a tiny apps market, not as polished as an iPad, not as cheap as an Android device, and not as practical as its own stablemate.

E3 2012: A Quiet Year For Videogaming

E3 took place in LA last week, perhaps for the last time, but it failed to really hit the headlines in the way it usually does. Why? Well, as expected, it was a very quiet year for announcements, with most firms recognizing that now is not a great time to heavily invest in the industry (see the recent game retail crisis, etc.). It was common knowledge that Sony and Microsoft were unlikely to announce new consoles, but even Nintendo failed to excite, despite the Wii U coming out later this year. However, there was some interesting news aside from the inevitable announcements of game title sequels.

  • Microsoft focused on the “home entertainment” angle. The Xbox has always been a potential Trojan horse to get Microsoft into consumers’ living rooms, and Microsoft demonstrated this strategy at this year’s E3: new music services, deals on video streaming, and, most interestingly, SmartGlass technology to link various Microsoft-based platforms.
  • Sony played it straight. Along with some new game announcements (mainly sequels, of course), it announced a revamp of PlayStation Plus — adding more free full games to make the service even better value. Sony’s most interesting new product was Wonderbook: Book of Spells, an augmented reality (AR) book tied to the Harry Potter franchise that works with the Move peripherals. Sadly, while Sony has a long line of interesting AR/video products (dating all the way back to EyeToy in 2003 and EyeToy:Chat in 2005), these never seem to draw in consumers in sufficient numbers.
  • Nintendo snatched defeat from the jaws of victory. It should have walked away with the conference, but instead it failed to impress — failing to confirm pricing or launch details for the Wii U. Still, we got Pikmin 3 — finally! Luckily for Nintendo, at least some of the third-party publishers announced some interesting Wii U titles.

Elsewhere, the show highlighted a slew of sequels from the major publishers and the continuing resurgence of the indie developer sector. Most interestingly, Peter Molyneux’s new firm 22Cans announced Curiosity; it’s not really a game but more of a social media experiment. Elder Scrolls Online also got its first real showing. Whether the franchise can reverse the ongoing trend toward free-to-play (F2P) MMOs remains to be seen; it’s a strong brand but, arguably, not Star Wars strong, and The Old Republic is losing subscribers.

What is E3 good for?

Slow years like this inevitably lead to questions about whether E3 is as relevant as it once was. After all, many of the new game announcements were trailed or leaked prior to the show; with so many online sources (Eurogamer, Joystiq, Kotaku, Spong) covering gaming every day of the year, E3’s no longer a great way of getting that big-hit mainstream press coverage. However, E3 is:

  • Great for doing proper business. While the gaming media (and gamers) bemoaned the move to a much smaller show in Santa Monica in 2007 as lacking in glamour, you can bet just as much useful business was done between distributors, retailers, developers, and publishers.
  • A useful date in the diary for an industry temperature check. E3’s June date puts it right at the point when vital Q4 titles and hardware have been finalized — meaning distributors, developers, and the media get hands-on with near-final game builds or hardware. Admittedly, given that some titles have already slipped to 2013, the usefulness of the timing has been somewhat diminished this year.
  • A great venue for the whole gaming ecosystem to have a meeting of minds (hopefully). E3 was born in the PC gaming age, just as consoles were enjoying their second coming (e.g., original PlayStation, Sega Saturn). It has continued to be dominated by these platforms — mostly the consoles and their portable stablemates. While recent years have seen some embracing of mobile gaming, the booming casual/social game market hasn’t been particularly well represented. This is changing: Zynga was at the show this year for the first time — albeit more on a recruitment drive than to demonstrate its wares — and the pace of change should accelerate, turning E3 into a truly platform-agnostic forum for the industry.

So, When Do The Tablet Wars Start?

The iPad is a true phenomenon, selling around 70 million units since launch and projected (by Gartner) to reach up to 169 million units per year by 2016. It has demonstrated the consumer (and, potentially, business) desire for a simpler device that delivers a fantastic media “consumption” experience in conjunction with simple yet compelling apps.

Android tablets and Windows 7-based tablets have also been around for some time, so you’d have thought that the tablet “war” would have started already. Not so much. There have been several false starts: the Samsung Galaxy Tab, BlackBerry PlayBook, and HP TouchPad — the latter two briefly even outselling the iPad in certain segments/markets, but only after “fire sale” discounting — have all been heralded as serious challengers but have failed to make an impact. These were skirmishes rather than an all-out war.

The Amazon Kindle Fire made some inroads in Q4 2011, extending the firm’s e-reader device line, but this seemed to wither on the vine in Q1 2012. New devices like the ASUS Transformer and second-generation Samsung Galaxy tablets seem to be better received, and Google’s own tablet may arrive soon. These will, doubtless, cement Android’s position (based on cumulative sales) as a significant second-place player. But the tablet war won’t really heat up until Microsoft hits the market with both Windows RT and Windows 8 tablet devices.

Microsoft needs Windows 8 and Windows RT to work straight out of the gate.

Windows RT on ARM architectures will provide a proper Metro-driven, Windows-like tablet — one better than those cobbled together with Windows 7 to try and keep business clients from buying iPads — and at a price point (hopefully) comparable with other tablet offerings. Meanwhile, if you need real Windows on a tablet with proper backward compatibility, Windows 8 tablets on x86 architectures should arrive at around the same time. Pricing on the latter is likely to start high and then trickle down as component prices drop; it’s also where we’ll see interesting “hybrid” devices like laptops with touch screens and tablets with slide-out keyboards.

It’s a bold move and, arguably, one that Microsoft should have made last year; Windows RT will introduce a lower-cost iPad competitor with a good user interface (UI) and some legacy compatibility (for Office docs), but it may end up as just another Zune HD — superior to the iPod in terms of hardware and UI but gaining zero traction in the market. Similarly, Windows 8 tablets could be far too expensive; if they cost more than a decent laptop and iPad combined, it’s hard to envisage rational IT managers or brand-conscious consumers opting for the untried tablet.

Perhaps this is why forecasts from the likes of Gartner and DisplaySearch see iOS as the leading tablet platform all the way out to at least 2017, with Android only gaining ground slowly and Microsoft performing poorly (according to Gartner) or atrociously (according to DisplaySearch).

It’s too early to call a winner in the long term.

The truth is that with no international market for the Kindle Fire yet, only rumours of the Google tablet, and no pricing details for either flavour of Windows 8 tablet, it’s too early to announce the winner of this war. Apple heads into the conflict with tremendous momentum and economies of scale, but the same could have been said of Sony, Kodak, or Atari in the past. The key questions will be:

  • Who will deliver a tablet that supports those neglected usage scenarios (transactions, work stuff, communications)?
  • What will be the difference in price points between Windows RT devices and entry-point x86 Windows 8 tablets? Will all Windows 8 tablets be “transformer” or hybrid models with slide-out keyboards . . . or will there be a mainstream, pure tablet offering based on x86 architecture?
  • How long will there be manufacturers with feet in both the Windows and Android camps? Will we see this breaking down, as per today’s “PC manufacturers” and “smartphone manufacturers”, with just a few firms (Samsung, Apple, Sony) being global players in both?
  • Who is going to explain to the poor consumer standing in a PC retailer the difference between, and unique benefits of: 1) a traditional notebook running Windows 8; 2) an Ultrabook with a touch screen running Windows 8; 3) a tablet running Windows 8; 4) a tablet running Windows RT . . . even before we factor in Apple devices, Android tablets, hybrid Android devices, and Chrome OS laptops!

The Future Of The Digital Home

(see the previous 3 posts for background; part I, part II, part III)

Is the concept of the digital home redundant now? Have events bypassed “something that never was”? No. While the need for a self-contained system in a consumer’s home with storage, intelligence, and management may have been superseded by high-bandwidth, high-availability broadband and cloud services, the things that I believed consumers would need from digital services, devices, and applications still hold true — and in many cases still haven’t been provided by today’s technology.

Future trends that fall under the “digital home” umbrella include:

  • The fight for “aggregation hubs.” Streaming services have made an impression, and the seamless delivery of content — ranging from e-books to videogames and device applications — now happens as a matter of course. However, these services are still fragmented: each supplier (Amazon, Steam, iTunes) requires its own interface and supports a different set of client devices. Global titans like Google, Apple, Amazon, and even Microsoft want to bring all this together via individual or household user accounts that tie together all your legitimate movies, music, applications, and e-books. The successful firm becomes a trusted resource for the consumer — and can corner the market in upselling or advertising to them.
  • Network refinement. Wi-Fi still isn’t the networking nirvana that device makers would have you believe; at the very least, it can be complemented with other technologies like NFC, Bluetooth, or 4G, and perhaps even low-power technologies like Z-Wave and ZigBee will finally come good (although I’m not holding my breath!). But as we reach a point when gigabytes of data could be moving to and from devices in the home on a regular basis, Wi-Fi may hit its capacity limits (see the rough arithmetic after this list). Shifting to a powerline-based network or wired backbone may be the only way to keep up with traffic demands.
  • Storage and application migration to the cloud. Today’s browser-based applications and social networks already run across multiple devices without ever leaving the cloud, but traditional applications will increasingly do the same — be it Office 365, photo-editing packages, or gaming via OnLive or Gaikai. The advantages of online version control, storage, subscription models, and easy sharing make the locally installed software package look increasingly redundant, while the lack of optical drives in devices like Ultrabooks or tablets makes installation from disc very tricky. Online storage is already going this way as Dropbox, SkyDrive, iCloud, and Google Drive compete for consumer attention.
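
As a quick illustration of the Wi-Fi capacity point above, here is some back-of-envelope arithmetic in Python. The daily traffic figure and the usable throughput numbers are assumptions for the sake of illustration, not benchmarks:

```python
# How long does a day's household traffic take to shift at various
# usable throughputs? All figures are illustrative assumptions.

DAILY_GB = 60.0  # e.g. HD streams, cloud back-ups, and game downloads

def hours_to_move(gigabytes: float, usable_mbps: float) -> float:
    """Hours of continuous transfer needed at a given usable rate."""
    megabits = gigabytes * 8 * 1000  # GB -> megabits (decimal units)
    return megabits / usable_mbps / 3600

links = {
    "shared 802.11n Wi-Fi (real-world ~80 Mbps)": 80,
    "powerline network (~200 Mbps)": 200,
    "gigabit wired backbone (~900 Mbps)": 900,
}

for label, mbps in links.items():
    print(f"{label}: {hours_to_move(DAILY_GB, mbps):.1f} h for {DAILY_GB:.0f} GB")
```

At these assumed rates, a shared Wi-Fi link spends well over an hour and a half of the day doing nothing but shifting that traffic, while a wired backbone barely notices it.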

Even without these specific areas of focus, there is still mileage in the concept of greater interoperability between devices and services; maybe Microsoft Research is on to something with its HomeOS, but this would take many years to achieve critical mass.

Whatever Happened To The Digital Home? Part III

(carrying on directly from my previous post)

Stuff that didn’t even occur to me:

  • Tablets. This isn’t a great shock; forecasts and models are all based on evolutionary change to existing ecosystems and technology. Apple’s iPad success was a revolutionary change that no analyst could have predicted. Interestingly, the iPad is a sort of halfway house between the old PC-based home and the potential digital home, offering an easy-to-use, flexible consumption device that hides all that techie stuff. It helps that, in iTunes, Apple has delivered the equivalent of another concept in the report: the third-party media/content aggregator.
  • Social networks. Like tablets, the rise and rise of the likes of Facebook and Twitter has fundamentally changed consumers’ relationship with their home technology. A PC, TV, or mobile phone in the home is now merely a gateway to accessing friends and the wider community rather than a solution in itself. Arguably, this is a far healthier relationship, aside from the desperate need to communicate absolutely everything, obviously.
  • The move to web-based services and then back to apps. This is an interesting one to consider; as a consumer technology analyst, I naturally expected digital home experiences to be delivered via installed software — software that was perhaps even installed at the factory for devices like TVs. The growth of Flash, HTML5, Ruby on Rails, etc. meant that many services and experiences were delivered via a browser. This makes sense in retrospect: once a compatible browser is available on a device, a service becomes available with very little (if any) tweaking — much better than having to rewrite for every architecture or operating system. More interesting still is the reversal of this trend as app stores and downloadable apps for phones, tablets, and PCs aim to “monetize” consumer service delivery; you lose some of that web browser compatibility if you code directly for iOS, Android, or Windows, but you gain consumer engagement (and, potentially, direct revenue).

(I’ll finish off this series of posts with a look to the future — is the digital home a redundant concept?)

Whatever Happened To The Digital Home? Part II

(carrying on directly from my previous post)

Things that failed or haven’t happened yet:

  • Video chat. While Skype and its competitors have done very well on PCs, it’s still not the ubiquitous video chat (via TVs, phones, game consoles, etc.) that I had envisioned and that would get us beyond today’s tech-aware audience and into every home. It will be interesting to see where this goes in the future as Microsoft adds functionality to Skype.
  • Centralized storage. I’ve used NAS devices and home servers for nearly a decade, and this may have blinded me to the fact that most consumers still rely on local PC/phone storage for the sole copies of their content — with perhaps an external hard disk for back-up if you’re lucky. The idea of a dedicated storage device on the home network is still the sole preserve of techies and content hoarders; arguably, the window of opportunity for folks like Netgear, Synology, and QNAP to engage a more mainstream audience is closing, as online storage services like Dropbox, SkyDrive, and Google Drive will eventually render local storage redundant. Additionally, the need for storage efficiency has decreased as storage costs have plummeted: in 2004, a 250 GB hard disk cost $250 according to this great cost comparison; you can now get 3 TB drives for much less than that if you shop around (see the quick cost-per-gigabyte arithmetic after this list). This has meant that building several gigabytes of storage into every device (phone, DVR, TV, camera) is easier and more cost-effective than having a central store — even if this does lead to massive duplication and version control nightmares.
  • Voice control. This idea was thrown into the mix to spice it up, as consumer-based voice control seemed fairly unlikely in 2004. Sure enough, there still aren’t any convincing multi-device voice control technologies in people’s homes, but we’re not far off in terms of the underlying technology — Xbox Kinect and Apple’s Siri are starting to show that this kind of thing can work in a limited capacity.
  • Connected appliances. We’re still no nearer to the “Internet-enabled fridge” than we were back in 2004. The downsides of high cost, long replacement cycles, and perceived lack of utility still outweigh the potential upsides — the kitchen sees the most traffic in the house, it’s a good place for a Wi-Fi router, and connectivity offers appliance maintenance benefits. The recent failure of Chumby — whose cute connected display/alarm clock/app store never found a market — demonstrates the risks associated with razor-thin hardware margins. But there is still hope: the excitement around the Nest Learning Thermostat last year and the potential applications of maker-type technology like Raspberry Pi or Arduino in this space mean that we may yet see dumb technology replaced over time.
  • That “brain” to manage the digital home. As storage has become super cheap and Wi-Fi the near-universal networking standard, the management of more centralized storage and more complex networks hasn’t really been needed. Add in the growth in streaming to individual devices — effectively point-to-point delivery from the content provider — and the intelligence needed to manage the digital home becomes redundant. The closest we have to this today is Apple’s device and iTunes ecosystem; loading multiple devices, managing streaming, and offering (for the more technically minded) network back-up solutions, it has become a default “brain” for those buying into an Apple-centric home. Again, more intelligent management would allow better back-ups, more seamless content sharing, and fewer “Why won’t video X play on device Y?” frustrations — but it’s difficult to see who would provide this now that so many devices manage their own connectivity and content.
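
To make the cost point in the centralized-storage bullet concrete, here is the cost-per-gigabyte arithmetic in a few lines of Python. The 2004 figure is the one cited above; the current 3 TB drive price is my own rough assumption:

```python
# Cost-per-gigabyte comparison for the centralized-storage point above.
# The 2004 figure comes from the post; the 2012 price is an assumption.

drives = {
    "2004: 250 GB hard disk at $250": (250, 250.00),
    "2012: 3 TB hard disk at ~$150 (assumed)": (3000, 150.00),
}

for label, (capacity_gb, price_usd) in drives.items():
    print(f"{label}: ${price_usd / capacity_gb:.2f} per GB")

# Roughly a 20x fall in under a decade — which is why stuffing storage
# into every individual device now beats building one central store.
```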

(next up: what couldn’t have been anticipated when the digital home concept was first created)

Whatever Happened To The Digital Home? Part I

Just over 8 years ago, I wrote a Forrester report titled “A Manifesto For The Digital Home,” outlining what needed to happen from a consumer’s perspective for the true “digital home” to become a reality. (We defined the digital home as a single, unobtrusive network environment where entertainment, communication, and applications could be shared across devices by multiple household members.) A lot has changed in the intervening years, but are we really any closer to that reality now?

From a consumer’s perspective, I hypothesized that four things needed to be in place to make the digital home a mainstream reality: flexibility (of connection, exchange, and ease of use); control (of sharing, data privacy, and what goes where); security (of personal information, bought content, and communications); and mobility (of devices, applications, and content). Of course, all of these needed to be underpinned by affordable technology and desirable content and applications.

For this to work, the digital home needed five key technology elements: a network (or, more likely, multiple seamlessly bridged networks); great interfaces on multiple devices; centralized storage; some form of central management function with the intelligence to manage the network, storage, and access issues; and great content that had been “digital-home-enabled” — i.e., able to be shared, backed up, and transcoded without licensing or technical issues.

Some things I got right:

  • Device-agnosticism. More and more content and services now run across a variety of devices. Interestingly, this has been driven by social media and content owners promoting browser-based or streaming solutions rather than (as predicted) by standards organizations or an altruistic streak in the hardware manufacturers — most of those efforts have got bogged down in copy protection or years of certification.
  • Streaming content. Referred to somewhat quaintly as “broadband VOD” at the time, the streaming of content has taken off in a big way in major markets, mainly to prevent other distribution methods (legal or otherwise) from taking hold. Advances in broadband speeds and compression technologies have exceeded even my optimistic expectations at the time.
  • Easy networking. This has happened, sort of. Surprisingly, instead of the vision of a co-operating set of network technologies working together where they are best suited (3G/4G outside the home, Wi-Fi for computing, ZigBee/Z-Wave for appliances, etc.), we’ve ended up with faster Wi-Fi crammed into pretty much all devices with 3G as the “just works but it might be expensive” fallback. This certainly makes the network topology easier, and attaching to secure Wi-Fi routers is much easier today than it was 8 years ago. But I can’t help feeling we’ve missed a trick here; the reason those low-power, short-range solutions existed was to facilitate much broader connectivity without security or configuration issues. In addition, Wi-Fi is still an expensive option (both in terms of power and components), and this has held back the networking of non-traditional devices.

(I’ll continue this series with analysis of stuff that didn’t happen as expected and what has happened that couldn’t be anticipated in my next post)

Online Cloud Storage: Future Table Stakes Or Killer App?

Google has at long last officially announced Google Drive, and tech blogs are awash with comparisons to Dropbox, iCloud (slightly unfairly), SkyDrive, and other cloud storage services. The early consensus seems to be that SkyDrive just wins out in terms of free storage and incremental paid storage (particularly if, like me, you already had a SkyDrive account and opted in to the free 25 GB capacity upgrade), while none of the main platforms support all the clients that you may have been hoping for (omitting Linux, Android, iOS, or Windows Phone, depending on which platform you’re looking at).

This explosion in available online storage has looked inevitable ever since Dropbox (and several other firms) really hit home with simple desktop folder-like services that don’t try to do too much (sync calendars, offer workflow solutions, etc.). Security experts will argue about whether the encryption is up to snuff (it isn’t), but most consumers will be storing personal (non-confidential) material on there anyway.

Arguably, we’re only at day 1 of the real competition. Features (and third-party clients) will be added, the free storage amounts will (inevitably) increase over time, and different business models and audience segments will emerge — for example, services for SMB customers are already available.

The key question, though, is whether these firms can make a business out of this. In the short term, certainly — as long as they’re offering something that isn’t free elsewhere (remember those “premium” web email services that offered more storage before those limits pretty much disappeared — thanks, Google) or that has better functionality/is easier to use than the competition (Dropbox still scores well here). The problem for the pure-play offerings is that when storage becomes just another feature of Microsoft’s, Google’s, Amazon’s, or Apple’s online offerings — most of which are free or wrapped up in one easy subscription — the justification for paying separately for the service disappears.

This is where the dreadful “stickiness” term comes into play: Dropbox, ADrive, JustCloud, SugarSync, and a host of others need to fight to make their services so attractive (or difficult to give up) that continuing to pay a reasonable fee seems the best option. But this is tricky; they can’t offer more and more storage, and erecting barriers to prevent consumers from moving their files elsewhere defeats the whole object of the service. In fact, as the once-superior Dropbox client shows, any advantage is likely to be short-lived. One possible key to survival is making the storage useful to the user’s social circle, not just the user. I’m less likely to move my thrilling 4-hour video of the kids’ last birthday party if it means I have to bring the grandparents up to speed on how to register for and access a new online storage solution. Dropbox is introducing direct links to customers’ shared files, which is a nice step in this direction.* Its referral program, which offers extra storage for each person you get to sign up, has also swelled its customer ranks to 50 million people; that base is unlikely to decline too rapidly.

However, I’m not convinced that the best route for Dropbox and its ilk beyond the next 12 months isn’t to get bought by the likes of a Google or Microsoft looking to grow their own user base. An alternative, for the more ambitious pure plays, would be to partner into an emerging ecosystem to fight the established players; combine online storage with a social network, Twitter client, location service, and mobile data plans, and suddenly you are looking at a compelling bundle. Unfortunately, most of these other apps are free to use and already have privacy concerns, so online storage of personal files may not fit well with this. Google will have to face this challenge itself.

* (In fact, Dropbox’s official blog pretty much uses a [less cynical] word-for-word version of the previous example, which I’ve only just looked at, honest!)

What It Means: The Failure Of Game Retail For Publishers And Platform Owners

As discussed in previous posts, game retailers have to radically change their strategy if they are to survive on the high street, but what does this major shift in consumer buying habits and, potentially, retailers’ strategy mean for the titans of the videogame world: publishers and platform holders?

The good news:

  • More direct digital sales. A decrease in the physical availability of the product is bound to spur the (already growing) trend in digital downloads — particularly for more obscure titles or add-ons that are unlikely to be stocked/discounted by non-dedicated game retailers. The boom in indie PC games is a clear example of this already happening; boxed PC games have been a highly fragmented market prone to piracy for years, and systems like Steam have enabled otherwise unlikely titles to make it big via secure digital distribution.
  • The long-term decline of the secondhand market. As previously discussed, publishers have long considered secondhand games a thorn in their side, diverting sales from new titles — or so the theory goes. While an online secondhand market will continue to grow, the disappearance of high-street stores with lots of available secondhand titles (often shelved next to the same title, new) reduces impulse-buying opportunities.
  • A smoother supply chain. Obviously, digital sales don’t require holding inventory; in addition, much of the complexity of distribution, credit facilities, and returns will disappear if physical boxed games end up being distributed mostly via two or three massive online stores and major chains/supermarkets. However, there are significant downsides to dealing with only a few firms like Wal-Mart, Tesco, or Amazon — see below.
  • Direct engagement with customers (or at least better information via partners). What do you, as a publisher, know about your end customer — or how many units were bought in a particular state? Perhaps a buyer is tied into your loyalty program or online service — but that doesn’t tell you where they bought from. By simplifying the supply chain and even selling digital goods directly, you gain insight into the buying behaviour of your customers and should be able to respond more quickly and effectively to their needs. Whether big retailers like Amazon will share this information (even for a fee) is trickier; it depends on whether they view the data as a revenue opportunity or a strategic advantage.

The bad news:

  • Supermarkets and multi-category retailers become the primary physical retail outlets. You may have simplified your supply chain, but when Wal-Mart becomes responsible for 50% of your title sales, you become overly reliant on its largesse. And firms like Wal-Mart and Tesco negotiate hard for discounts. A secondary consideration is that, like books, videogames will become a loss leader for multi-category stores: pull punters in with $10 off Mass Effect 3 and then sell them $200 of groceries. As a publisher, you still get your revenue, but this exerts downward pressure on price points and devalues games.
  • Online retail is still a mixed blessing. The gold rush in online shopping is largely over for most categories, including videogames. A few well-behaved retailers dominate in multiple geographic markets; they don’t tend to discount massively and do now take part in pre-order and limited-edition promotions. But their long-term strategy isn’t necessarily obvious. Could Amazon become a leading competitive digital game distribution service? Will eCommerce (and rent-by-post) players jump into the gap left by high-street stores for secondhand games? The answer to both of these questions is probably ‘yes’.
  • A short-term spike in the secondhand market. A key strategy (as I see it) for struggling physical stores is to up their game in secondhand and trade-in titles. While, in the long term, publishers and platform holders may be able to cut off this market’s air supply with digital downloads and a reduction in the number of physical game disks/cards, that is going to take some time. Be prepared for struggling chains to keep pushing the boundaries in terms of what they see as their right to exploit this (more) profitable segment.
  • The high-street showcase disappears. Often overlooked — especially by people who see GAME and GameStop stores as somewhat grubby holes (guilty as charged!) — is the showcase that these venues provide for new titles and new game systems, however badly organized they may seem to an outsider. 3D-based systems are the clearest example here: you can’t demonstrate a 3DS on TV or YouTube; you actually have to play with one in person. Ultimately, this also means that videogames cease to hold a special place in consumers’ minds (just as books and music have): there will be no dedicated stores where you can browse and be immersed in your hobby/obsession; instead, the latest Call of Duty gets picked up while you do the weekly food shop.

Today’s videogame market is such that both publishers and platform owners will probably benefit most from a slow, graceful decline in high-street videogame stores rather than catastrophic collapses — even if the threat of the latter accelerates plans around disintermediation.