Whatever Happened To The Digital Home? Part II

(carrying on directly from my previous post)

Things that failed or haven’t happened yet:

  • Video chat. While Skype and its competitors have done very well on PCs, it’s still not the ubiquitous video chat (via TVs, phones, game consoles, etc.) that I had envisioned and that would get us beyond today’s tech-aware audience and into every home. It will be interesting to see where this goes in the future as Microsoft adds functionality to Skype.
  • Centralized storage. I’ve used NAS devices and home servers for nearly a decade, and this may have blinded me to the fact that most consumers still rely on local PC/phone storage for sole copies of their content — with perhaps an external hard disk for back-up if you’re lucky. Conceptually, the idea of a dedicated storage device on the home network is still the sole preserve of techies and content hoarders; arguably, the window of opportunity for folks like Netgear, Synology, and QNAP to engage with a more mainstream audience is closing, as online storage services like Dropbox, SkyDrive, and Google Drive will eventually render local storage redundant. Additionally, the need to use storage efficiently has decreased as storage costs have plummeted: in 2004, a 250 GB hard disk cost $250 according to this great cost comparison; you can now get 3 TB drives for much less than that if you shop around (see the quick cost-per-gigabyte sketch after this list). This has meant that building several gigabytes of storage into every device (phone, DVR, TV, camera) is easier and more cost-effective than having a central store — even if this does lead to massive duplication and version control nightmares.
  • Voice control. This idea was thrown into the mix to spice it up, as consumer-based voice control seemed fairly unlikely in 2004. Sure enough, there still aren’t any convincing multi-device voice control technologies in people’s homes, but we’re not far off in terms of the underlying technology — Xbox Kinect and Apple’s Siri are starting to show that this kind of thing can work in a limited capacity.
  • Connected appliances. We’re still no nearer to the “Internet-enabled fridge” than we were back in 2004. The downsides of high cost, long replacement cycles, and perceived lack of utility still outweigh the potential upsides (the kitchen sees the most traffic in the house, it’s a good location for a Wi-Fi router, and connected appliances offer maintenance benefits). The recent failure of Chumby — with its cute connected display/alarm clock/app store that failed to find a market — demonstrates the risks associated with razor-thin hardware margins. But there is still hope: the excitement around the Nest Learning Thermostat last year and the potential applications of maker-type technology like Raspberry Pi or Arduino in this space mean that we may yet see dumb technology replaced over time.
  • That “brain” to manage the digital home. As storage has become super cheap and Wi-Fi the near-universal networking standard, the management of more centralized storage and more complex networks hasn’t really been needed. Add in the growth in streaming to individual devices — effectively a point-to-point delivery from the content provider — and the intelligence needed to manage the digital home becomes redundant. The closest we have to this today is Apple’s device and iTunes ecosystem; loading multiple devices, managing streaming, and offering (for the more technically minded) network back-up solutions, it has become a default “brain” for those buying into an Apple-centric home. Again, more intelligent management would allow better back-ups, more seamless content sharing, and fewer “Why won’t video X play on device Y?” frustrations — but it’s difficult to see who would provide this now that so many devices manage their own connectivity and content.
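As a rough illustration of how far the price of storage has fallen (referenced in the centralized storage bullet above), here is a minimal cost-per-gigabyte sketch. The 2004 data point is the one quoted in the post; the 2012 price for a 3 TB drive is an assumed shop-around figure, not a quoted one.

```python
# Cost-per-gigabyte sketch. The 2004 figure (250 GB for $250) is from the post;
# the 2012 figure for a 3 TB drive is an assumption for illustration only.
drives = [
    ("2004: 250 GB drive at $250", 250, 250.00),            # (label, capacity in GB, price in USD)
    ("2012: 3 TB drive at an assumed ~$170", 3000, 170.00),
]

for label, capacity_gb, price_usd in drives:
    print(f"{label}: ${price_usd / capacity_gb:.2f} per GB")
```

Even with a generous assumption on the 2012 price, the cost per gigabyte drops from around $1.00 to a few cents, which is why stuffing storage into every device has become the path of least resistance.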

(next up: what couldn’t have been anticipated when the digital home concept was first created)

Whatever Happened To The Digital Home? Part I

Just over 8 years ago, I wrote a Forrester report titled “A Manifesto For The Digital Home,” outlining what needed to happen from a consumer’s perspective for the true “digital home” to become a reality. (We defined the digital home as a single, unobtrusive network environment where entertainment, communication, and applications could be shared across devices by multiple household members.) A lot has changed in the intervening years, but are we really any closer to that reality now?

From a consumer’s perspective, I hypothesized that four things needed to be in place to make the digital home a mainstream reality: flexibility (of connection, exchange, and ease of use); control (of sharing, data privacy, and what goes where); security (of personal information, bought content, and communications); and mobility (of devices, applications, and content). Of course, all of these needed to be underpinned by affordable technology and desirable content and applications.

For this to work, the digital home needed five key technology elements: a network (or, more likely, multiple seamlessly bridged networks); great interfaces on multiple devices; centralized storage; some form of central management function with the intelligence to manage the network, storage, and access issues; and great content that had been “digital-home-enabled” — i.e., able to be shared, backed up, and transcoded without licensing or technical issues.

Some things I got right:

  • Device-agnosticism. More and more stuff will run across a variety of devices. Interestingly, this has been driven by social media and content owners promoting browser-based or streaming solutions rather than (as predicted) by standards organizations or by an altruistic streak in the hardware manufacturers — most of those efforts have got bogged down in copy protection or years of certification.
  • Streaming content. Referred to somewhat quaintly as “broadband VOD” at the time, the streaming of content has taken off in a big way in major markets, mainly to prevent other distribution methods (legal or otherwise) taking hold. Advances in broadband speeds and compression technologies have exceeded even my optimistic expectations at the time.
  • Easy networking. This has happened, sort of. Surprisingly, instead of the vision of a co-operating set of network technologies working together where they are best suited (3G/4G outside the home, Wi-Fi for computing, ZigBee/Z-Wave for appliances, etc.), we’ve ended up with faster Wi-Fi crammed into pretty much all devices with 3G as the “just works but it might be expensive” fallback. This certainly makes the network topology easier, and attaching to secure Wi-Fi routers is much easier today than it was 8 years ago. But I can’t help feeling we’ve missed a trick here; the reason those low-power, short-range solutions existed was to facilitate much broader connectivity without security or configuration issues. In addition, Wi-Fi is still an expensive option (both in terms of power and components), and this has held back the networking of non-traditional devices.

(I’ll continue this series with analysis of stuff that didn’t happen as expected and what has happened that couldn’t be anticipated in my next post)

Online Cloud Storage: Future Table Stakes Or Killer App?

Google has at long last officially announced Google Drive, and tech blogs are awash with comparisons to Dropbox, iCloud (slightly unfairly), SkyDrive, and other cloud storage services. The early consensus seems to be that SkyDrive just wins out in terms of free storage and incremental paid storage (particularly if, like me, you already had a SkyDrive account and opted in to the free 25 GB capacity upgrade), while none of the main platforms support all the clients that you may have been hoping for (omitting Linux, Android, iOS, or Windows Phone depending on which platform you’re looking at).

This explosion in available online storage has looked inevitable ever since Dropbox (and several other firms) really hit home with simple desktop folder-like services that don’t try to do too much (sync calendars, offer workflow solutions, etc.). Security experts will argue about whether the encryption is up to snuff (it isn’t), but most consumers will be storing personal (non-confidential) material on there anyway.

Arguably, we’re only at day 1 of the real competition. Features (and third-party clients) will be added, the free storage amounts will (inevitably) increase over time, and different business models and audience segments will emerge — for example, services for SMB customers are already available.

The key question, though, is whether these firms can make a business out of this. In the short term, certainly — as long as they’re offering something that isn’t free elsewhere (remember those “premium” web email services that offered more storage before those limits pretty much disappeared — thanks, Google) or that has better functionality/is easier to use than the competition (Dropbox still scores well here). The problem for the pure-play offerings is that when storage becomes just another feature of Microsoft’s, Google’s, Amazon’s, or Apple’s online offerings — most of which are free or wrapped up in one easy subscription — the justification for paying separately for the service disappears.

This is where the dreadful “stickiness” term comes into play: Dropbox, ADrive, JustCloud, SugarSync, and hosts of others need to fight to make their service so attractive (or difficult to give up) that continuing to pay a reasonable fee seems the best option. But this is tricky; they can’t offer more and more storage, and erecting barriers to prevent consumers moving their files elsewhere defeats the whole object of the service. In fact, as the once-superior Dropbox client shows, any advantage is likely to be short-lived. One possible key to survival is making the storage useful to the user’s social circle, not just the user. I’m less likely to move my thrilling 4-hour video of the kids’ last birthday party if it means I have to bring the grandparents up to speed on how to register for and access a new online storage solution. Dropbox is introducing direct links to customers’ shared files, which is a nice step in this direction.* Its referral program’s offering of extra storage for each person you get to sign up has also swelled its customer ranks nicely to 50 million people – that’s a lot of people, and unlikely to decline too rapidly.

However, I suspect the best route for Dropbox and its ilk beyond the next 12 months may simply be to get bought by the likes of a Google or Microsoft looking to grow their own user base. An alternative, for the more ambitious pure plays, would be to partner into an emerging ecosystem to fight the established players; combine online storage with a social network, Twitter client, location service, and mobile data plans, and suddenly you are looking at a compelling bundle. Unfortunately, most of these other apps are free to use and already have privacy concerns, so online storage of personal files may not fit well with this. Google will have to face this challenge itself.

* (In fact, Dropbox’s official blog pretty much uses a [less cynical] word-for-word version of the previous example, which I’ve only just looked at, honest!)

What It Means: The Failure Of Game Retail For Publishers And Platform Owners

As discussed in previous posts, game retailers have to radically change their strategy if they are to survive on the high street, but what does this major shift in consumer buying habits and, potentially, retailers’ strategy mean for the titans of the videogame world: publishers and platform holders?

The good news:

  • More direct digital sales. A decrease in the physical availability of the product is bound to spur the (already growing) trend in digital downloads — particularly for more obscure titles or add-ons that are unlikely to be stocked/discounted by non-dedicated game retailers. The boom in indie PC games is a clear example of this already happening; boxed PC games have been a highly fragmented market prone to piracy for years, and systems like Steam have enabled otherwise unlikely titles to make it big via secure digital distribution.
  • The long-term decline of the secondhand market. As previously discussed, publishers have long considered secondhand games a thorn in their side, diverting sales from new titles — or so the theory goes. While an online secondhand market will continue to grow, the disappearance of high-street stores with lots of available secondhand titles (often shelved next to the same title, new) reduces impulse-buying opportunities.
  • A smoother supply chain. Obviously, digital sales don’t require holding inventory; in addition, much of the complexity of distribution, credit facilities, and returns will disappear if physical boxed games end up being distributed mostly via two or three massive online stores and major chains/supermarkets. However, there are significant downsides to dealing with only a few firms like Wal-Mart, Tesco, or Amazon — see below.
  • Direct engagement with customers (or at least better information via partners). What do you, as a publisher, know about your end customer — or how many units were bought in a particular state? Perhaps a buyer is tied into your loyalty program or online service — but that doesn’t tell you where they bought from. By simplifying the supply chain and even selling digital goods directly, you gain insight into the buying behaviour of your customers and should be able to respond more quickly and effectively to their needs. Whether the big retailers like Amazon will share this information (even for a fee) is trickier; it depends whether they view the data as a revenue opportunity or a strategic advantage.

The bad news:

  • Supermarkets and multi-category retailers become the primary physical retail outlets. You may have simplified your supply chain, but when Wal-Mart becomes responsible for 50% of your title sales, you become overly reliant on its largesse. And firms like Wal-Mart and Tesco negotiate hard for discounts. A secondary consideration is that, like books, videogames will become a loss leader for multi-category stores: pull punters in with $10 off Mass Effect 3 and then sell them $200 of groceries. As a publisher, you still get your revenue, but this exerts downward pressure on price points and devalues games.
  • Online retail is still a mixed blessing. The gold rush in online shopping is largely over for most categories, including videogames. A few well-behaved retailers dominate in multiple geographic markets; they don’t tend to discount massively and do now take part in pre-order and limited-edition promotions. But their long-term strategy isn’t necessarily obvious. Could Amazon become a leading competitive digital game distribution service? Will eCommerce (and rent-by-post) players jump into the gap left by high-street stores for secondhand games? The answer to both of these questions is probably ‘yes’.
  • A short-term spike in the secondhand market. A key strategy (as I see it) for those struggling physical stores is to up their game in secondhand and trade-in games. While publishers and platform holders may, in the long term, be able to cut off the air supply to this market with digital downloads and a reduction in the number of physical game disks/cards, that is going to take some time. Be prepared for struggling chains to keep pushing the boundaries in terms of what they see as their right to exploit this (more) profitable segment.
  • The high-street showcase disappears. Often overlooked — especially by people who see GAME and GameStop stores as somewhat grubby holes (guilty as charged!) — is the showcase that these venues provide for new titles and new game systems, however badly organized they may seem to an outsider. 3D-based systems are the clearest example here: you can’t demonstrate a 3DS on TV or YouTube; you actually have to play with one in person. Ultimately, this also means that videogames cease to hold a special place in consumers’ minds (just like books and music) — dedicated stores where you can browse and be immersed in your hobby/obsession, rather than just picking up the latest Call of Duty while you do the weekly food shop.

Today’s videogame market is such that both publishers and platform owners will probably benefit most from a slow, graceful decline in high-street videogame stores rather than catastrophic collapses — even if the threat of the latter accelerates plans around disintermediation.

How To Keep Videogame Retail Relevant

Globally, store-based videogame retail is suffering. In addition to the ongoing collapse of GAME Group in the UK (which is half-saved for now — sort of), NPD recently reported a decline of 34% year-on-year for store-bought games in January 2012 in the US, while hardware declined slightly more (38%). Admittedly, January can be a flaky month for retail, but the overall trend for physical software sales is down.

Why? Some of this is down to a struggling economy and cautious consumers, but it’s also a natural side effect of the trends outlined in the previous post:

  • Hardware revenues are declining. With no new consoles due for some months to come (aside from the PlayStation Vita), hardware sales are reduced; it’s pretty much just accessories and add-ons like Kinect and Move.
  • Digital distribution cuts out retail. Digital distribution means money flows directly to publishers (often via platform owners like Sony, Microsoft, and Valve). Even for shop-bought titles, downloadable content can extend play lifetime and put off the next boxed product sale. NPD also recently worked out that $3.3 billion was spent in the US and Europe on digital downloads in Q4 2011. Physical retailers would have seen virtually none of this (aside from selling gift cards).
  • Non-gamers don’t see game retail as a desirable shopping “experience”. Most worryingly for game retailing, market growth is almost all in social or mobile gaming. Even if there were a physical product, would mainstream consumers go to GameStop for these?

So what can retailers do? They need to change people’s perceptions of why a store is better than an online portal, otherwise they will follow music and book retailers into obscurity. While GAME Group has had a stay of execution, it will need to do something different to secure its long-term future. Four foundations spring to mind (along with the current but declining day-to-day business of software sales):

  • Bring secondhand games to the fore (even more). Secondhand game sales are a controversial topic that retailers have typically had to tip-toe around. Publishers get angry at what they see as the lost revenue of a new game sale, but for retailers, a successful secondhand game section makes better margins than new game sales. Secondhand sections have to be well managed to achieve this, though; that means more selective game trading (what you buy, how much you pay), better inventory distribution across multiple stores, and even offloading excess stock via an online portal, partner, or eBay. Making the secondhand section look less like a post-hurricane garage sale would help, too. Of course, publishers may object to this — and come the next generation of consoles, they may pretty much kill this market by withdrawing physical media — but for now, secondhand games continue to bring in money in tough economic times.
  • Stop toying with online and go all out. The game retail groups may have online portals and e-commerce facilities, some of which are even quite good, but they aren’t Amazon or Play.com good! Retailers need to bring their unique high-street presence to their online offerings: order and pick up in store, trade in and drop off at store, virtual events, and local store forums should at least level the playing field with the big e-commerce players.
  • Make the stores more relevant. Yes, I’m going to use the dreadful “retail experience” phrase — but I am going to try not to reference Apple Stores (darn — too late!). Physical game stores have limited square footage, a lot of stock, and cater to a young male audience; they are never going to be “minimal”, “airy”, or “smell good” — but they can change some things. Reduce front-of-store stock and countless racks and fulfill from the back room; introduce more demo pods and advice points. More interestingly, think about demo events, “parent evenings” where you explain things like age classifications and downloadable content — and refocus on supporting digital, with stored value cards, memory cards, and capacity upgrade advice. And always remember: customers without credit cards are your friends!
  • Get publishers more involved. Publishers might not like an increased focus on secondhand, but game shops still remain one of the most effective ways of directly reaching the more active component of their customer base. It’s time for them to help out more. Move beyond exclusive downloadable content (which costs publishers virtually nothing) and get publishers to provide previews and showcase material and to create competitions. Ironically, small PC game developers (most of which distribute digitally) may be the best bet here; they can offer show reels, demos, and individual levels that the in-store staff can support. This may also help improve the relatively poor image that store staff have in the eyes of the gaming community.

RIM: Can It Be Saved?

Oh dear! RIM’s latest quarterly earnings make grim reading — down on pretty much all metrics and a $125 million loss for the quarter. Is the company circling the drain or can it survive?

Glass half-full:

  • The firm still has solid revenues with good (albeit declining) margins and is making strides toward its new BlackBerry 10 platform.
  • The PlayBook made decent inroads into the tablet space, admittedly after a very shaky start and heavy discounting.
  • RIM is still the recognized expert in enterprise messaging.
  • The developer ecosystem is still relatively healthy.

Glass half-empty:

  • BlackBerry 10 is still 6 to 9 months away. In the meantime, iOS, Android, and maybe even Windows Phone 7 will pull further away and define the market.
  • Giving up on the consumer market (which RIM also announced yesterday) means abandoning the place where most smartphones find their initial success.
  • Hardware manufacturing, once a point of pride, now seems like a millstone around the firm’s neck. Its failure to build a desirable high-end smartphone demonstrates this.

What’s the prognosis?

RIM needs to turn the corner — and fast. The financial markets and analysts are already writing it off, and its best enterprise customers will follow suit unless it takes drastic action. Here are three options:

  1. Trim the product line and refocus. This is almost an extension of the announcement that RIM is moving away from the consumer space; get 3 excellent devices into the market soon with the new platform and make sure they are the best BlackBerrys ever.
  2. Do a “reverse Nokia.” If RIM is as confident of the BlackBerry 10 platform as it claims, get out of hardware and move into licensing. Who would be interested in yet another smartphone OS is a different question.
  3. Do an IBM. Quit smartphones and focus on infrastructure and applications. Much of RIM’s business smarts are in encryption, traffic management, third-party application support, and platform security. In the future app marketplace world, this could be the basis of a significant business across smartphone platforms. It also gets the firm out of the smartphone OS business — which some are already calling a three-horse race (clue: RIM is no. 4).

April 27: The New Amazon Kindles Hit Europe, But Without The Fire

As of yesterday morning, Amazon is accepting pre-orders for the Kindle Touch (Wi-Fi or 3G) in the major European markets (the UK, France, Germany, Spain, and Italy) for around the same price as the Kindle Keyboard used to sell for — although it will cost more than the newer non-keyboard, non-touch Kindle (naming conventions are not, apparently, Amazon’s strong point!).

The ‘new’ devices have been available in the US since mid-November 2011, so they’re heading to Europe some 6 months later; that’s a reasonable lag for most markets, but a step backwards for the UK, where the Kindle Keyboard launched at much the same time as in the US.

The Touch has slightly more memory than some of the older devices, is smaller than the Kindle Keyboard, and — of course — has an infrared touchscreen. It’s a nice device with the same excellent screen and battery life, and it continues to be a proof point for single-application devices that really excel. However, these devices are looking increasingly expensive when one can buy an (admittedly not great) 7-inch or 10-inch Android tablet for about the same price. And while device manufacturers like Sony and Kobo aren’t offering high-profile competitive devices in Europe to match the Barnes & Noble Nook in the US, they do offer perfectly competent — or even better, depending on how firmly you buy into Amazon’s ecosystem — e-Ink readers at a variety of price points.

Perhaps the biggest question for tech-conscious consumers though is this: “Where’s the Kindle Fire?” This was announced at the same time as the Touch in the US and also started shipping there in mid-November. I think that Europeans will have to wait quite a bit longer for the Kindle Fire, and here’s why:

While the Fire is Amazon’s future, the company still does quite nicely with its e-Ink devices. Here’s a question for you: Is Amazon more like Google or Apple? With regard to devices, Amazon’s business model is far more Google-like; while they both make devices (or support partner manufacturers), they are really interested in the content and your connection to it — either selling it to you (Amazon) or selling advertising around it (Google). In contrast, Apple has built a compelling ecosystem, including content offerings, but it is still really about selling you that next device. While Amazon makes money from Kindles (both the Touch and the Fire), the real margins are in what it sells you to put on it. If you use an iPad, Android tablet, or PC to download music, e-books, and video from Amazon, its margins from you are higher than from those customers to whom it also sold a device. Confused? Let me explain with a hypothetical example:

  1. John buys a Kindle Fire for $200 (Amazon margin: 20%) and buys $200 of content (Amazon margin: 35%). Total profit margin from John’s purchases: 27.5%.
  2. David uses his iPad and buys $200 of content from Amazon. Total profit margin from David’s purchase: 35%.

Amazon might have made more dollars in profit from John, but the content is infinitely resellable for no extra effort — devices aren’t.
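To make that arithmetic explicit, here is a minimal sketch of the blended-margin calculation behind the John and David example; the 20% device margin and 35% content margin are the illustrative figures used above, not Amazon’s actual numbers.

```python
def blended_margin(purchases):
    """Return total profit as a share of total spend.

    purchases: list of (spend_in_usd, margin) tuples.
    """
    profit = sum(spend * margin for spend, margin in purchases)
    total_spend = sum(spend for spend, _ in purchases)
    return profit / total_spend

# John: a $200 Kindle Fire at a 20% margin plus $200 of content at a 35% margin.
john = blended_margin([(200, 0.20), (200, 0.35)])
# David: $200 of content only, bought on a device Amazon didn't sell him.
david = blended_margin([(200, 0.35)])

print(f"John: {john:.1%}, David: {david:.1%}")  # John: 27.5%, David: 35.0%
```

The dollar profit is higher from John ($110 versus $70), but the margin (and the repeatability) favours David’s content-only relationship.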

So why is Amazon in the devices business at all? David’s example above gives a clear indication: he may buy e-books from Amazon now, but he’ll almost certainly buy his music, videos, and apps from iTunes. Amazon doesn’t want to be locked out further down the line, hence the Fire. Will Europe have to wait another 6 months for the Fire? Other than the iPad, there is no compelling device to take its place, so it probably will. But tech-conscious consumers may choose to hold off on buying a Kindle Touch and wait for the Fire; luckily for Amazon, it’s more mainstream consumers who buy e-Ink Kindles.

The supporting infrastructure isn’t in place. I’ve touched on this before; the Kindle Fire draws on Amazon’s back-end cloud infrastructure (as does the new Touch) for storage and web browsing support (Amazon Silk). Rolling this out internationally means a significant investment in data centers and legal clearances. This is easier with the Touch, as you are only looking at e-books — most of which Amazon sold you in the first place. The data centers will come, given Amazon’s cloud investments for its business-to-business offerings, but it will take time. One obvious alternative would be to launch the Fire without these cloud facilities — but this lessens both the utility of the device (limiting storage) and how tied in customers are to Amazon.

We’re still waiting for the Android tablet market to shake out. The Kindle Fire is already one of the best-selling Android tablets, but this market still lacks focus (and decent margins). Court cases, form-factor debates, and telcos (particularly in Europe) that are still smarting from the last subsidy disaster (mainly the Samsung Galaxy Tab 10.1) mean that Amazon can afford to take its time and get its device (and ecosystem) right. Additionally, Windows tablets (on x86 or ARM) are still at least 6 to 9 months away and will be much more expensive than Amazon’s current or proposed devices.

My best guess is that the UK may see the Kindle Fire in late Q2 or early Q3, as Amazon has traditionally used the UK as a European launch pad; Germany and France may follow by year-end. Because of the data center restrictions, it’s possible that other, smaller markets may never get the device in its present form.

GAME Group: Scenarios For The Future (If It Has One)

I won’t retread the series of events that have led to today’s (dire) situation for GAME Group; instead, let’s look at the three most likely scenarios for how this story ends:

1) Recover and survive. This is looking less likely by the day. GAME is in the classic death spiral: 1) cash flow problems lead to 2) suppliers tightening financial terms, which results in 3) an inability to service customer demand, causing 4) even deeper cash flow problems. Add in the crash in the share price from nearly £3 in 2008 to just 1p, and raising additional funds from banks or the market seems unlikely. Some reports are saying that cash is so short that it’s unlikely GAME can pay the quarterly rents due at the end of March. Disposal of some assets like overseas operations (GameStop is reportedly interested in the Spanish and Portuguese operations) or prime retail locations could help the balance sheet long enough for the group to get back up to profitable trading, but this would be a long, hard slog, with suppliers and creditors continuing to view it suspiciously. Most recently, the FT reports (warning: pay wall) that OpCapita (the group that bought Comet for £2 last year) might offer GAME a financial lifeline – again, improving survival hopes or at least buying the firm extra time to come up with a strategy.

2) Get bought by another group (pre- or post-administration). If the current operations can’t be saved, selling the whole firm (or at least the majority UK arm) to someone else is a real possibility. After all, this is a firm with a healthy revenue stream (£1.6 billion for the financial year ending January 2011) in a vibrant market — although admittedly the high-street element is struggling — and with a wide reach across much of Europe (the second-largest gaming market after North America, according to VGChartz). Again, GameStop is an obvious buyer here but is reportedly only really interested in those Spanish and Portuguese gamers, so the price would have to be a real bargain.

This raises another issue: why would anyone buy GAME Group at the moment when they can wait a month or so and buy a more attractive ‘prepackaged administration’ firm without some of the baggage (i.e., creditors) that they will have to deal with in a straight purchase? The upside of purchasing it as a going concern (whether in or out of administration) is the continuity it allows the business; stores can stay open, staff be retained, and suppliers placated. Inevitably, efficiencies will be needed post-acquisition; stores will close and management jobs will disappear or be merged with the structure of the new corporate overlords, but this is still a much better outcome than the worst-case scenario . . .

3) Close down and be broken up. This is still a possibility, particularly given the difficult retail environment in many European territories. If a buyer can’t be found for the firm as a whole, the administrators’ duty is to get as much money back for creditors and shareholders as possible via any means. Overseas operations or the online store could be sold off as going concerns, but the UK may see piecemeal store-by-store acquisitions via management buyouts or selective site purchases by other retail chains (effectively what happened to Woolworths Group in the UK back in 2008/2009).

This last scenario is bad for the economy, bad for employment, and very bad for the game retail ecosystem. Without GAME in the UK, supermarkets become the dominant providers of in-person game sales, which means less title choice, fewer demonstration areas, and the loss of much of the secondhand gaming market. Sure, HMV (while it still trades) offers a good selection of games and more knowledgeable staff, but other than that, gaming retail risks being thrown back 20 to 30 years to a time when (usually excellent) independent shops struggled to make ends meet — but now with the added competition of online retail and digital distribution.

Scenario No. 2 is still the most likely outcome at this stage. This would mean that GAME survives (in some form) for years to come — with the question of whether it can adapt in the long term. Can videogame retail buck the depressing trend set by book, music, and DVD retail and stay relevant? I’ll look at that in my next post.

How The Videogame Market Has Changed Over The Past Decade

In the 12 years that I’ve been following the global videogame market, it has gone from a sizable niche industry of $14.7 billion in 2000 to generating between $64 billion and $74 billion, depending on who you ask (after all, what’s $10 billion between friends?). The industry’s focus has also altered in this time:

Gaming hardware has shifted from PCs to consoles to ‘devices.’ Even 12 years ago, consoles like the brand-new PlayStation 2 had started to refine what ‘gaming’ meant, with sophisticated hardware — including one of the best DVD players at the time — and blockbuster titles. Much of the past decade has belonged to the console world. But in the past 1 to 2 years, we’ve seen the rise of non-dedicated gaming devices (such as Android smartphones and iPads) and the creation of mainstream ‘social gaming.’

The console’s ‘5-year rule’ has ceased to be relevant. Back in the day, new iterations of popular consoles came along every 5 years — making use of more powerful hardware and new storage technologies. Sony was the first firm to flex (if not break) this rule in the modern console age; although it continued to release new consoles, it also kept selling its old platforms for years. However, the current generation of consoles has well and truly broken the old rule. Neither the PS3 nor the Xbox 360 seems likely to be superseded any time soon, despite having been available since 2006 and 2005, respectively.

Connectivity has moved from being the exception to the rule. It’s sometimes difficult to remember what the gaming world was like in 2000. The PlayStation didn’t have any connectivity out of the box; you got an analogue modem with the Dreamcast; and the Ethernet-equipped Xbox was still at least a year away in most territories. Everything you did on consoles (and even on many PCs), you did with physical media — no friends lists, no DLC, no patches.

Digital distribution has made PC gaming interesting again. Given all this, has the PC become redundant as a gaming platform? Far from it! Two (diametrically opposed) trends have kept it interesting:

  1. The rise and rise of social games. As Zynga has shown, there is a massive appetite for social games either within social networks or via dedicated online portals. While mobile devices will, inevitably, become the dominant platform for these games, the PC currently still dominates as the device on which to play Farmville, Triple Town, or Bejeweled Blitz — aided, no doubt, by Apple’s iOS devices not supporting Flash.
  2. The digital distribution of ‘proper’ games — and the rebirth of indie developers. Led initially by Valve’s Steam service, the distribution of PC games electronically has grown hugely over the past 18 months. Interestingly, this has allowed small developers who stood no chance of monetizing a ‘boxed’ release in retail to do their thing; The Binding of Isaac, Super Meat Boy, and (my personal favorite) Dungeons of Dredmor are great examples here. Add to this great initiatives like the Humble Indie Bundle, and the PC gaming space is more interesting than ever.

Mega-publishers have been created. Firms like EA, Activision Blizzard, Square Enix, and Namco Bandai have grown from a decade of mergers and acquisitions — as some of their awkward names might suggest. This has been driven partly by games becoming multi-year, multimillion-dollar projects that favour larger-scale operations and partly by the need for diversification (as with Blizzard contributing World of Warcraft revenues to Activision).

Many other things have also changed, such as the rise of mega franchises like Call of Duty, battles over the secondhand market, MMORPGs, etc.; several of these warrant a whole post to themselves, but I don’t want to labor the point here. All these changes are putting an obvious strain on some of the traditionally successful firms in the space. Most notably, boxed-game retailing on the high street is in crisis, with GAME Group in the UK being the starkest example — I’ll look at its prospects in my next post.

What Does HDD Manufacturer Consolidation Mean For The Wider Technology Industry?

As of last week, when Western Digital’s proposed deal to buy Viviti Technology (formerly Hitachi GST) was finalized, the world has just three global suppliers of hard disk drives (HDDs). Western Digital, Seagate, and Toshiba control around 50%, 40%, and 10% of the market, respectively. The history of acquisitions in this space is fascinating, as shown by the consolidation diagram in a great article on Wikipedia.

Admittedly, some fairly stringent conditions have been imposed by regulators on both Seagate and Western Digital for their latest acquisitions — including running separate operations for a number of years. But a market of up to 700 million HDD units per year (if you extrapolate from iSuppli numbers) is boiling down to just three suppliers. While solid state drives (SSDs) are manufactured by a whole range of firms, these will be too expensive for most applications and will have too low a storage capacity to match hard disks for the next 3 to 4 years.

What does the wider consumer technology industry gain and lose from this accelerated market consolidation?

Glass half-full:

  • Stability. All three firms now have a steady volume business, decent balance sheets, and forecasts of healthy market growth for the next few years.
  • Concentrated evolutionary innovation. The three firms will now provide a top-down focus for research into improving capacities, energy usage, and form factor. While there are some differences in capability — Western Digital has not yet developed hybrid drive technology, for instance — all are focusing on a continued supply of better and better conventional HDDs. This isn’t particularly sexy, but it is at least dependable.
  • Consistency. Fewer manufacturers mean fewer variations in elements like controllers, cache, and software drivers, making the specification and manufacturing process somewhat easier.
  • Better ‘partners.’ One additional benefit deriving from the previous three points is that firms in the position that Western Digital, Seagate, and Toshiba find themselves in often start making more of an effort to be proactive partners with their customers — allowing them to better understand client demands and needs to ensure that their competitors don’t get a foot in the door.

Glass half-empty:

  • Reduced supplier competition. Obviously, all three firms will compete massively for market share, particularly Seagate and Western Digital; Toshiba is so far behind that it doesn’t stand a realistic chance of catching the top two. But where is the plucky little manufacturer targeting industry verticals? Who will force the pricing issue?
  • Global supply chain risks. As we saw with the Thailand floods in October 2011, disruption in a manufacturing supply chain concentrated in one country or region can have a knock-on effect, although arguably it wasn’t as much of a disaster as many predicted — helped by generally lackluster PC and technology sales. Will the supply chain consolidate even further in China and Thailand now that just three firms control the purse strings?
  • Suppliers with more power. Consumer technology manufacturers already have to deal with component-supplying titans like Microsoft, Intel, and Samsung; you can now add Western Digital and Seagate to that list.

Overall, these acquisitions show the natural (if somewhat accelerated) consolidation of a costly manufacturing industry that doesn’t necessarily deliver fantastic margins; those are, hopefully, reserved for the final device manufacturer. For the technology industry, the pluses probably outweigh the minuses; even in the worst-case scenario where competition decreases, SSD manufacturers will keep pushing the envelope and keep the HDD titans on their toes.