Cisco has built a formidable business in data plumbing since its creation in 1984. This success with enterprises and the back-end provision of the Internet made Cisco a wealthy company but one with a problem: Where to go when you’ve wired up the whole world?
A major strategy that the firm started about a decade ago was to move closer to consumers (or SMBs) through the acquisition of firms that made consumer premises equipment (Linksys, Scientific Atlanta), consumer devices (Pure Digital Technologies – creators of the Flip camera, KiSS Technology), or services (Pure Networks, makers of Network Magic). Naturally, these firms only represented a fraction of Cisco’s 150+ acquisitions over the years, but they stuck out as firms that weren’t in Cisco’s traditional market areas.
Cisco is also renowned for its ability to embrace, merge, and get good results from firms that it has acquired – so what went wrong with those consumer acquisitions? Why hasn’t the firm built a more recognizable name on the high street, and what does this teach us about today’s consumer technology space? Cisco has:
- No brand strategy. Cisco was never going to spend Apple-level money to build a consumer brand. Linksys has a good name in routers but only among those who understand/care about such things. And while most Cisco acquisitions could be brought in under the gold-plated Cisco business brand, this also was pretty unfamiliar to consumers. Back when Linksys was acquired, this wasn’t as big a deal as it is today, when branding from Apple, Samsung, and even Microsoft is so dominant.
- Razor-thin margins. For a firm that made an excellent business of higher-margin enterprise and infrastructure hardware – with additional revenue from training, certification, and maintenance contracts – making do with the 5% or less margin that manufacturers of successful consumer technologies get was never going to be easy.
- Increased competition from China. The past 10 years have also seen the emergence of stronger global competitors from China: Huawei and ZTE are the best known. While Cisco may be able to fend off much of the challenge in the enterprise space by playing the quality card or lobbying governments for bans on “security” grounds, stopping cheap home routers and mobile dongles (largely sourced by telcos and cablecos for rebadging and distribution to consumers) is far harder. A firm like ASUSTeK is even competing at the high-end with its excellent “Dark Knight” router.
- Made bets that misjudged the market. Finally, Cisco made a number of strategic bets that simply didn’t pay off. Two spring to mind: 1) building Linksys music streamers and home servers to compete with the likes of Sonos – both markets have proven to be tiny; and 2) getting into dedicated point-and-shoot imaging devices just as mobile phones became equally competent and user-friendly for shooting YouTube clips.
So what’s next? It seems likely that if the rumors of a Linksys sale do turn out to be true, one of the other consumer networking brands like Belkin, D-Link, or NETGEAR could pick it up. But, ironically, a firm like Huawei or ZTE would benefit the most from the (limited) brand recognition that Linksys offers in the marketplace. The disposal will mean that Cisco retrenches to its heartland of enterprise networking, licking its wounds after an interesting (from an analyst perspective) decade of consumer experimentation.
For other enterprise-focused firms with ambitious consumer goals (we’re looking at you, Microsoft!), this is a cautionary tale – being a great technology company that excels at integrating acquisitions isn’t enough to catch a break in the consumer technology world today.
Last week, Synology announced a partnership with innovative cloud storage firm Symform. Finally, someone is combining the peace of mind and worry-free connectivity speed of the network-attached storage (NAS) drive with the convenience of cloud storage. Arguably, this is targeted more at small and medium-size businesses (SMBs) — those big enough to have a security and backup policy but too small (or too cheap) to build an enterprise relationship with a commercial cloud provider like Amazon. However, even with my consumer-focused hat on, I see a lot to like:
- It allows a trickle update of cloud files. As anyone who has wondered what the hell Microsoft’s SkyDrive desktop app is doing as it whirs away for a couple of hours will know, syncing to the cloud is still pretty tedious — especially if your connectivity speed isn’t up to scratch. By adding a central repository of your data on the always-on NAS drive, you bypass this issue while still ensuring your files are available in the cloud.
- It pairs two technologies that lack sufficient mainstream appeal, creating a compelling hybrid. As I’ve said before, NAS technology hasn’t hit the levels of popularity that I expected it to 5-8 years ago; it will ultimately be replaced by cloud storage — but not for many years yet. I’ve also said that in the long term, cloud storage isn’t really an “application” or a “service”; it’s a facet or feature of other applications or services. Pairing lots of local networked storage with cloud back-up (or even just key directory duplication) means that you are getting the best of both worlds now rather than waiting for all-encompassing, super-reliable online services of the future.
- Symform’s business model doesn’t limit the amount you store in the cloud (unlike its competitors). Effectively, Symform works as a coordinator of available peer-to-peer (P2P) storage: Agree to let Symform use some of your spare hard disk space (either on a PC, a NAS drive, or a server), and it will give you half of that amount as cloud storage for free (on top of the initial 10 GB allowance). Symform promises secure, regulatory-compliant, globally distributed cloud storage for little more than the price of adding a new hard disk to your rack/NAS/PC – and that’s only if your existing storage is nearly full. Of course, you can pay as well . . . but that makes the offering significantly less attractive.
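Taken at face value, the contribution model described above reduces to simple arithmetic. This is a minimal sketch assuming the 10 GB base allowance and the contribute-two-get-one ratio as stated; the function name is purely illustrative:

```python
def free_cloud_allowance_gb(contributed_gb: float, base_gb: float = 10.0) -> float:
    """Sketch of the stated Symform model: a free base allowance plus
    half of whatever spare disk space you contribute to the P2P pool."""
    return base_gb + contributed_gb / 2.0

# Contributing 500 GB of spare NAS space would yield 10 + 250 = 260 GB.
print(free_cloud_allowance_gb(500))  # 260.0
```

In other words, the more spare capacity you pledge, the more cloud storage you earn; you only hit the paid tier if you want more cloud space than your spare local disk can "fund."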
But, there are some bridges yet to cross:
- It still means shelling out at least $400 for the local storage. Cost remains the biggest issue for consumers or small businesses looking at network storage. Why would you pay at least $400 for the most basic 2 TB Synology NAS set-up (for example, the DS212j plus two Western Digital 2 TB drives) when you can buy a 3 TB Seagate external USB3 drive for $135?* Well, there are lots of reasons that a seasoned IT professional would recognise: availability, redundancy, multidevice access, file syncing, and management tools, to name a few . . . but none of these resonate with mainstream consumers (or even the small end of the business world). Let’s not forget that consumers are still failing to manage and back up the gigabytes of unique and irreplaceable content generated by their digital cameras.
- Symform’s business model is both a blessing and a curse. While I commend Symform for coming up with something different from the largely interchangeable offerings of Box, Microsoft, Google, Amazon, and Dropbox, there are still some thorny questions that need answering:
- Can Symform make money if only a small fraction of users pay for storage? Of course, you could argue that the same can be said of Dropbox or Box; at least Symform doesn’t have to invest in building massive storage capacity to support its service.
- Can a P2P solution rival a big honking data center in Texas for reliability and speed? Again, there is a persuasive argument that P2P is more robust and efficient than traditional “client-server” models — just look at BitTorrent technology. As with BitTorrent, redundancy will be the key; it’s imperative that a user’s files aren’t corrupted if another customer’s storage node drops out of the pool.
- Is it legal? This is an argument that could run and run (and I’m not a lawyer…don’t even play one on TV). Government agencies already frown upon cloud solutions which store files/data outside their home geography. Does distributing tiny fragments of files globally make this better or worse? Similarly, can the US government ask for access to customers’ files as they can from other US cloud providers? Incidentally, it’s a myth that the 2001 Patriot Act makes the US the only country able to do this.
- Given the above, is Symform a long-term bet? Back-up is, by definition, all about peace of mind. You want your data to be secure both now and for the foreseeable future. This makes Symform a risky bet for businesses — although at least switching to a different service is easier these days than replacing actual physical back-up devices.
- The security and confidentiality of cloud storage will continue to be an issue, especially given Symform’s business model. When it comes to cloud storage, IT pros rightly point out that file security and confidentiality can be a real issue; you are effectively transmitting your files (usually unencrypted) to a remote data centre protected by a single password. And you could argue that this issue is compounded by Symform then farming out the virtual data center to other individuals’ NAS drives.
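On the redundancy question raised above, one common approach (a toy illustration only; I have no detail on Symform's actual scheme) is to add a parity fragment so that a file survives the loss of any single storage node:

```python
from functools import reduce


def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings together."""
    return bytes(x ^ y for x, y in zip(a, b))


def split_with_parity(data: bytes, n: int) -> list[bytes]:
    """Split data into n equal-size fragments plus one XOR parity
    fragment, so any single lost fragment can be reconstructed."""
    size = -(-len(data) // n)                 # ceiling division
    padded = data.ljust(size * n, b"\0")      # pad to equal fragment lengths
    frags = [padded[i * size:(i + 1) * size] for i in range(n)]
    frags.append(reduce(xor_bytes, frags))    # parity fragment
    return frags


def rebuild(frags: list[bytes], lost: int) -> bytes:
    """Recover the fragment at index `lost` by XORing the survivors."""
    return reduce(xor_bytes, (f for i, f in enumerate(frags) if i != lost))


frags = split_with_parity(b"irreplaceable family photos", 4)
assert rebuild(frags, 2) == frags[2]  # a dropped node's fragment is recoverable
```

Real distributed-storage systems typically use full erasure codes (Reed-Solomon and friends) that tolerate multiple simultaneous node failures; single-parity XOR is just the simplest member of that family, but it shows why a node dropping out of the pool needn't corrupt anyone's files.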
Overall, I hope that Symform succeeds — it’s trying something different and in theory offering a valuable free-ish service with little downside. The Synology partnership certainly strengthens its hand, while also making Synology’s NAS drives more appealing. There is bound to be a shake-up in the cloud storage market in the next 12 to 18 months; too many firms are offering free or low-cost storage with little differentiation. Symform at least has the advantage of a different infrastructure and business model.
* Of course, we’re not strictly comparing like with like here; the 2 TB Synology set-up offers RAID redundancy, and it could be configured as a 4 TB storage option.
(see the previous 3 posts for background; part I, part II, part III)
Is the concept of the digital home redundant now? Have events bypassed “something that never was”? No. While the need for a self-contained system in a consumer’s home with storage, intelligence, and management may have been superseded by high-bandwidth, high-availability broadband and cloud services, the things that I believed consumers would need from digital services, devices, and applications are still true — and in many cases still haven’t been provided by today’s technology.
Future trends that fall under the “digital home” umbrella include:
- The fight for “aggregation hubs.” Streaming services have made an impression, and the seamless delivery of content — ranging from e-books to videogames and device applications — now happens as a matter of course. However, these services are still fragmented; a range of suppliers (Amazon, Steam, iTunes) requires different interfaces and supports different client devices. Global titans like Google, Apple, Amazon, and even Microsoft want to bring all this together via individual or household user accounts that tie together all your legitimate movies, music, applications, and e-books. The successful firm becomes a trusted resource for the consumer — and can corner the market in upselling or advertising to them.
- Network refinement. Wi-Fi still isn’t the networking nirvana that device makers would have you believe; at the very least, it can be complemented with other technologies like NFC, Bluetooth, or 4G, and perhaps even those low-power technologies like Z-Wave and ZigBee will finally come good (although I’m not holding my breath!). But as we reach a point when gigabytes of data could be moving to and from devices in the home on a regular basis, Wi-Fi may hit its capacity limits. Shifting to a powerline-based network or wired backbone may be the only way to keep up with traffic demands.
- Storage and application migration to the cloud. Today’s browser-based applications and social networks already run across multiple devices without ever leaving the cloud, but traditional applications will increasingly do the same — be it Office 365, photo-editing packages, or gaming via OnLive or Gaikai. The advantages of online version control, storage, subscription models, and easy sharing make the locally installed software package look increasingly redundant, while the lack of optical drives in devices like Ultrabooks or tablets makes installation from disk very tricky. Online storage is already going this way as Dropbox, SkyDrive, iCloud, and Google Drive compete for consumer attention.
Even without these specific areas of focus, there is still mileage in the concept of greater interoperability between devices and services – maybe Microsoft Research is on to something with its HomeOS, but this would take many years to achieve critical mass.
(carrying on directly from my previous post)
Stuff that didn’t even occur to me:
- Tablets. This isn’t a great shock; forecasts and models are all based on evolutionary change to existing ecosystems and technology. Apple’s iPad success was a revolutionary change that no analyst could have predicted. Interestingly, the iPad is a sort of half-way house between the old PC-based home and the potential digital home, offering an easy-to-use, flexible consumption device that hides all that techie stuff. It helps that in iTunes, Apple has delivered the equivalent of another concept in the report: the third-party media/content aggregator.
- Social networks. Like tablets, the rise and rise of the likes of Facebook and Twitter has fundamentally changed consumers’ relationship with their home technology. A PC, TV, or mobile phone in the home is now merely a gateway to accessing friends and the wider community rather than a solution in itself. Arguably, this is a far healthier relationship, aside from the desperate need to communicate absolutely everything, obviously.
- The move to web-based services and then back to apps. This is an interesting one to consider; as a consumer technology analyst, I naturally expected digital home experiences to be delivered via installed software — software that was perhaps even installed at the factory for devices like TVs. The growth of Flash, HTML5, Ruby on Rails, etc. meant that many services and experiences were delivered via a browser. This makes sense in retrospect: once a compatible browser is available on a device, a service becomes available with very little (if any) tweaking — much better than having to rewrite for every architecture or operating system. More interesting still is the reversal of this trend as app stores and downloadable apps for phones, tablets, and PCs aim to “monetize” consumer service delivery; you lose some of that web browser compatibility if you code directly for iOS, Android, or Windows, but you gain consumer engagement (and, potentially, direct revenue).
(I’ll finish off this series of posts with a look to the future — is the digital home a redundant concept?)
(carrying on directly from my previous post)
Things that failed or haven’t happened yet:
- Video chat. While Skype and its competitors have done very well on PCs, it’s still not the ubiquitous video chat (via TVs, phones, game consoles, etc.) that I had envisioned and that would get us beyond today’s tech-aware audience and into every home. It will be interesting to see where this goes in the future as Microsoft adds functionality to Skype.
- Centralized storage. I’ve used NAS devices and home servers for nearly a decade, and this may have blinded me to the fact that most consumers still rely on local PC/phone storage for sole copies of their content — with perhaps an external hard disk for back-up if you’re lucky. Conceptually, the idea of a dedicated storage device on the home network is still the sole preserve of techies and content hoarders; arguably, the window of opportunity for folks like Netgear, Synology, and QNAP to engage with a more mainstream audience is closing, as online storage services like Dropbox, SkyDrive, and Google Drive will eventually render local storage redundant. Additionally, the need for storage efficiency has decreased as storage costs have plummeted: in 2004, a 250 GB hard disk cost $250 according to this great cost comparison; you can now get 3 TB drives for much less than that if you shop around. This has meant that building several gigabytes of storage into every device (phone, DVR, TV, camera) is easier and more cost effective than having a central store — even if this does lead to massive duplication and version control nightmares.
- Voice control. This idea was thrown into the mix to spice it up, as consumer-based voice control seemed fairly unlikely in 2004. Sure enough, there still aren’t any convincing multi-device voice control technologies in people’s homes, but we’re not far off in terms of the underlying technology — Xbox Kinect and Apple’s Siri are starting to show that this kind of thing can work in a limited capacity.
- Connected appliances. We’re still no nearer to the “Internet-enabled fridge” than we were back in 2004. The downsides of high cost, long replacement cycles, and perceived lack of utility still outweigh the potential upsides: the kitchen sees the most traffic in the house, it’s a good place for a Wi-Fi router, and connected appliances offer maintenance benefits. The recent failure of Chumby — with its cute connected display/alarm clock/app store that failed to find a market — demonstrates the risks associated with razor-thin hardware margins. But there is still hope: the excitement around the Nest Learning Thermostat last year and the potential applications of maker-type technology like Raspberry Pi or Arduino in this space mean that we may yet see dumb technology replaced over time.
- That “brain” to manage the digital home. As storage has become super cheap and Wi-Fi the near-universal networking standard, the management of more centralized storage and more complex networks hasn’t really been needed. Add in the growth in streaming to individual devices — effectively a point-to-point delivery from the content provider — and the intelligence needed to manage the digital home becomes redundant. The closest we have to this today is Apple’s device and iTunes ecosystem; loading multiple devices, managing streaming, and offering (for the more technically minded) network back-up solutions, it has become a default “brain” for those buying into an Apple-centric home. Again, more intelligence management would allow better back-ups, more seamless content sharing, and fewer “Why won’t video X play on device Y?” frustrations — but it’s difficult to see who would provide this now that so many devices manage their own connectivity and content.
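The storage-cost collapse mentioned above is easy to quantify. The 2004 figure comes from the comparison cited earlier; the current 3 TB street price is a rough assumption for illustration:

```python
# 2004: 250 GB for $250 -> $1.00 per GB (from the cost comparison cited above)
cost_2004_per_gb = 250 / 250
# Today: 3 TB for roughly $150 (assumed street price) -> ~$0.05 per GB
cost_now_per_gb = 150 / 3000

print(f"{cost_2004_per_gb / cost_now_per_gb:.0f}x cheaper per GB")  # 20x cheaper per GB
```

At that rate, padding every phone, DVR, and camera with its own flash or disk really is cheaper than engineering a shared central store.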
(next up: what couldn’t have been anticipated when the digital home concept was first created)
Just over 8 years ago, I wrote a Forrester report titled “A Manifesto For The Digital Home,” outlining what needed to happen from a consumer’s perspective for the true “digital home” to become a reality. (We defined the digital home as a single, unobtrusive network environment where entertainment, communication, and applications could be shared across devices by multiple household members.) A lot has changed in the intervening years, but are we really any closer to that reality now?
From a consumer’s perspective, I hypothesized that four things needed to be in place to make the digital home a mainstream reality: flexibility (of connection, exchange, and ease of use); control (of sharing, data privacy, and what goes where); security (of personal information, bought content, and communications); and mobility (of devices, applications, and content). Of course, all of these needed to be underpinned by affordable technology and desirable content and applications.
For this to work, the digital home needed five key technology elements: a network (or, more likely, multiple seamlessly bridged networks); great interfaces on multiple devices; centralized storage; some form of central management function with the intelligence to manage the network, storage, and access issues; and great content that had been “digital-home-enabled” — i.e., able to be shared, backed up, and transcoded without licensing or technical issues.
Some things I got right:
- Device-agnosticism. More and more stuff will run across a variety of devices. Interestingly, this has been driven by social media and content owners promoting browser-based or streaming solutions rather than (as predicted) standards organizations or by an altruistic streak in the hardware manufacturers — most of those efforts have got bogged down in copy protection or years of certification.
- Streaming content. Referred to somewhat quaintly as “broadband VOD” at the time, the streaming of content has taken off in a big way in major markets, mainly to prevent other distribution methods (legal or otherwise) taking hold. Advances in broadband speeds and compression technologies have exceeded even my optimistic expectations at the time.
- Easy networking. This has happened, sort of. Surprisingly, instead of the vision of a co-operating set of network technologies working together where they are best suited (3G/4G outside the home, Wi-Fi for computing, ZigBee/Z-Wave for appliances, etc.), we’ve ended up with faster Wi-Fi crammed into pretty much all devices with 3G as the “just works but it might be expensive” fallback. This certainly makes the network topology easier, and attaching to secure Wi-Fi routers is much easier today than it was 8 years ago. But I can’t help feeling we’ve missed a trick here; the reason those low-power, short-range solutions existed was to facilitate much broader connectivity without security or configuration issues. In addition, Wi-Fi is still an expensive option (both in terms of power and components), and this has held back the networking of non-traditional devices.
(I’ll continue this series with analysis of stuff that didn’t happen as expected and what has happened that couldn’t be anticipated in my next post)