Can the Consumer Electronics Industry Regain Its Former Glory?

It’s been a rough few years for those firms selling consumer electronics; the lack of new formats and technology advances, combined with razor-thin margins and increased competition from non-traditional competitors, has taken its toll. Philips has finally thrown in the towel after years of struggle; Sharp is on a knife-edge; Sony has recently stemmed its haemorrhaging of money but is still making a loss; and Cisco has given up on its consumer aspirations.

What has caused this massive decline? Consumers are still buying “technology,” so someone is doing well. Why can’t CE firms compete?

  • “Hi-fi” and audio/visual components are obsolete. For many consumers, A/V components have become redundant. Low-quality MP3 files broadcast through dock speakers or [shudder] mobile phone speakers seem to be good enough for many consumers, particularly Millennials. Headphone manufacturers also do well out of this — Beats, Bose, and Sennheiser dominate — but again, these aren’t the CE giants.
  • TVs just don’t make money. Most households now have at least one large flat-panel TV and probably still watch predominantly standard definition content on it, so why upgrade? 3D hasn’t been the miracle cure that many manufacturers hoped for, and 4K resolution TV is still several years out from the mainstream. Even worse, when firms sell TVs, they often lose money; the margins are slim, shipping is expensive and retail stock is often discounted / returned.
  • Content has started flowing across multiple devices. More importantly, new competitors like Apple, Microsoft, and even Amazon have stormed the market with newer, digital, connected devices that often have supplementary business models (iTunes, Xbox games, etc.).
  • It’s the economy, stupid. As chains like Best Buy and the UK’s Comet have discovered, it’s not just that consumer preferences have shifted; people are laser-focused on cutting back on big-ticket discretionary purchases. New TVs, appliances, and holidays — for which people often bought new cameras, MP3 players, and portable media players like DVD players — top that list, unfortunately.

Can CE firms recover? There are two schools of thought here.

No they can’t:

  • Big-ticket, one-off, branded CE purchases are a thing of the past. Future consumer technology purchases will tie into an existing ecosystem far more than in the past. As such, those firms that own that ecosystem — Apple, Amazon, Google — will increasingly call the shots in terms of hardware.
  • Enthusiasts have moved on to Kickstarter and homebrew hardware. While there is still a market for niche hi-fi — firms like Sonos do quite well — this is a shrinking market consisting of consumers who buy a new turntable twice in their lifetime. More damningly, those hi-fi and electronics buffs who in the past would have supported new ideas from the likes of Philips or Sony increasingly look at interesting kit from Kickstarter projects or even put together their own hardware with homebrew equipment like the Raspberry Pi.
  • Scale and software are beating out quality and brand. As Samsung has demonstrated, you just can’t compete if you don’t have massive scale. “Quality” becomes a somewhat esoteric argument when the vast majority of your digital components are sourced from the same firms (often Samsung!) and assembled in the same Chinese factories as your competitors. Suddenly, the only differentiator ends up being the user interface and software (and OS) — fields where hardware-obsessed CE firms have repeatedly failed to show any aptitude.
  • Flexibility trumps quality. As content travels across ever more devices, arguably at lower quality than ever before — see streaming versus Blu-ray or MP3 versus vinyl/CD — the quality of the individual hardware components matters less. Why spend $2,000 on an HD TV when you end up watching most video content via iTunes on an iPad?

Yes they can:

  • There is still a place for well-designed and well-built devices. Many consumers want devices that will last for years (without the non-replaceable battery failing) or devices that perform single functions excellently rather than many functions adequately — still the case for digital cameras versus cameras built into smartphones. CE firms have excelled at these devices in the past and have only recently become obsessed with competing (unsuccessfully) with Apple.
  • Some firms are still doing OK. If you looked at Samsung’s results, you’d struggle to see the crisis. Similarly, there are some (admittedly weak) signs that Sony has turned the corner. What we are seeing at the moment is the weeding out of the weak and obsolete — firms that didn’t move with the times or (like Philips) have better chances/margins in other categories like medical and personal health devices.
  • Delivering seamless connected experiences without all that IT is still a pipe dream. The “IT” firms — and I’d count Apple, Microsoft, and Amazon among these — that have now encroached on the classic CE firm space talk a good talk about seamless experiences and streaming etc., but many of these components still don’t work as advertised or are prone to network and copy protection issues. Classic consumer electronics, on the other hand, were engineered to work in pretty much any operating environment without the need for a detailed knowledge of how to configure your router’s firewall.
  • CE firms are still best for truly independent content experience aggregation. CE has always been somewhat standoffish about “content” — opposing blank media taxes and generally taking the approach of “consumers can do what they want with our equipment.” Ironically, this makes them the ideal deliverer of content services from multiple sources like Spotify, Netflix, Amazon, iTunes, etc. Too many of the new breed of home technology providers — especially Apple — restrict what services are available on which devices for competitive commercial reasons. Similarly, broadcasters and content producers have a vested interest (or legal responsibility) to impose barriers (either geographic or monetary) to open content distribution.

My take is that over the next couple of years we will see a state somewhere between these two (deliberate) extremes. More firms will disappear or become mere marque brands for other firms’ products (like Polaroid or Kodak). You will also see strategic withdrawals from toxic categories (TVs, audio components, digital cameras, game consoles); while this would have been unthinkable for the boards of CE firms in the past, doing so today is an acceptance of the new commercial realities.

But you will also see great products coming from CE firms, especially as the global economy picks up and our definition of consumer technology expands to encompass environmental, home infrastructure, and wearable devices. Traditional CE firms that can work alongside the IT and social media giants — which will always be better at software engineering and connectivity but will fall down on hardware engineering and ease of use — have the best chance of surviving.

 

Sony’s PlayStation 4 – Initial Thoughts

Eschewing the usual E3 bun fight around who can announce first, Sony took the unusual step of scheduling a NYC briefing yesterday to announce the PS4. The briefing — broadcast live and watched by around half a million people on the official feed that I was tuned into — was the now-familiar mix of roving spotlights, developer testimonials, “simulated” games, and intriguing concept demos. Sony made its next-generation intent pretty clear: this is evolution rather than revolution, with the PS4 still being a powerful game-centric, under-the-TV box with physical controllers and movement-based control. Neither the price nor the geographic release window (beyond “holidays 2013”) was discussed, nor did we get to see what the box will look like. (My guess? Black, oblong, and plastic with some blue LEDs.)

While many of the concrete details were shrouded in meaningless marketing speak*, this initial peek behind the curtain reveals a number of interesting innovations:

  • Cloud and remote play take centre stage. The purchase of Gaikai last year hinted that Sony was seriously looking at game streaming; sure enough, it was David Perry who took the audience through many of the new network features of the PS4. Theoretically, all PS4 games will be available for remote play, and this technology also handles backward compatibility, given the shift to x86 hardware. It also allows for full game “demos” via streaming and the ability to start playing a game when it’s only partially downloaded (essential when you are talking about multi-gigabyte games).
  • A move from box-centric to game-centric experiences across multiple devices. Linked to the remote-play features is an increased focus on supporting play experiences on multiple devices. Sony would obviously like these to be the PS Vita or Sony-branded tablets and smartphones, but there’s no reason why the device pool couldn’t be larger given the firm’s commitment to Android. Android and iOS support was explicitly mentioned; let’s see how the implementations pan out.
  • More “user-friendly” features. The current generation of consoles introduced some complexity and hassle to the gaming experience; suddenly, you couldn’t just flip the off switch in frustration (the game could be mid-save), and you might have to download game patches and OS updates before you even started playing. This removed a lot of the immediacy from the “console experience” (one of the five core buzzwords!), so the PS4 aims to fix this. Obviously, in a connected world, you never actually get to “switch off” the console, but you will get instant suspend and resume. Actually, Sony has confirmed that the box will work when offline – but you’ll lose some functionality.
  • More social features and personalization. The new controller has a “Share” button that allows users to pause games and upload game play (which is constantly buffered to memory). Along with the extension of the PlayStation network as a gaming social network (including the introduction of real names as well as gamer tags), this points to an acknowledgement that today’s gamers demand as much social engagement in their console games as they do when (over)sharing their photos or music taste. Add in extended co-op play and spectating, and the PS4 is well on its way to being a true social gaming experience. Personalization also comes into play here – the console will learn your online, genre and friend preferences and adjust the UI as required.
  • A more PC-like architecture. While the console price point wasn’t announced, speculation is already rife that the price will be much lower than the PS3’s launch price — maybe as low as $400. Aside from acknowledging that gamers now have far more choice in living-room entertainment, including $70 Android consoles, this is facilitated by using more off-the-shelf components like x86 CPUs and a PC-style GPU. There are still expensive components in the box — 8GB of GDDR5 memory isn’t cheap — but we should remember that when the PS3 came out, Blu-ray drives were extremely expensive components.
  • A focus on creation activities. In addition to extended social facilities, it was left to Media Molecule (again!) to add some whimsy and true user-creation opportunities to the proceedings. While still a technical demo, using the Move controller for 3D sculpting looked amazing; if tied in with 3D printing services, this could be huge. And the music-number “Play” using Move controllers showed the direction that tools like Little Big Planet could take.

Sony glossed over a couple of topics in the two-hour briefing:

  • Physical media: What’s that? Physical media (i.e., game disks) wasn’t mentioned once in the proceedings, aside from an evolutionary discussion about how far we’ve come. It’s also clear that partnerships with indie developers, Blizzard and Bungie also favour digital distribution. So is the PS4 digital only? Of course not. As Shuhei Yoshida confirmed to Eurogamer, it will have a Blu-ray drive, which will even play second-hand games. A cynic may suggest that this is a deliberate ploy to bait Microsoft, which will probably have to acknowledge the inclusion of Blu-ray in the next Xbox (remember the HD DVD drive!) and may have plans to restrict second-hand game sales.
  • Is this bad news for E3? Why would Sony opt for a mid-February briefing rather than wait three months for a more comprehensive E3 announcement? It does mean that the PS4 gets a solo spotlight for several days and gives Sony better control of the messaging. This increasing trend for standalone, corporate-controlled unveilings (see also Apple, Google, Microsoft) is bad news for those marquee multifirm events like E3. Add to this the online demos, videos, and endless commentary, and waiting for three days in June to get your annual game news splurge suddenly seems increasingly antiquated. Of course, E3 still remains a great place to do business, but we may see a scaling back of the consumer glitz.

The elephant in the sexily lit NYC warehouse

Sony, and undoubtedly Microsoft in the coming weeks, is firmly committed to a powerful new home console — as are traditional publishers and the gaming press — but are these devices still relevant to the mass consumer? Tablet games, social games, and even F2P gaming have transformed the games industry over the past 18 months; most require only moderately powerful devices and small investments in games or micropayments (though these can quickly add up). Why buy a $400 console, a $60 game, and $10 DLC to do much the same thing — albeit in higher definition and with a deeper, more immersive experience?

Of course, dedicated gamers will adopt the new hardware and seek out new gaming experiences, but the past five years have shown that the big money is with casual console gamers (see the Wii’s success) and the new types of game mentioned above. The question for Sony, Microsoft, and Nintendo is how many of those dedicated gamers there are: more than there were 10 years ago, half as many as five years ago, or as many but split across eight rather than three platforms? I don’t pretend to know the answer (yet!), but there is no doubt that this new generation of consoles will face more competition, media scrutiny, and consumer choice than the three previous generations (at least) faced.

*Having core traits / buzzwords of “simple, immediacy, social, integrated, personalized” barely narrows it down; you could say this about pretty much any technology from the past 20 years!

Is 2013 The Year Of Android-Based Game Consoles?

Along with the Bluetooth forks, waterproof mobile phones, and massive TVs, one unexpected announcement for a product that might actually be useful was Nvidia’s out-of-the-blue Project Shield game console. Looking a bit like a Bluetooth controller accessory strapped to the bottom of a 5-inch touchscreen, it has impressive specs and is designed both as an Android gaming device and for streaming your PC games to an attached TV. Pundits have already started to weigh in on whether this will succeed and completely wreck the traditional game console market or just fizzle out; more interestingly, from a trend perspective, it adds to a number of other devices trying to bring Android gaming to home and portable console form factors. Four other notable examples are:

1) The OUYA console. The OUYA is a very cute 10 cm cube. This $99 box was funded through Kickstarter in August 2012, and development kits are already in developers’ hands, with final units hitting the market in March 2013.

2) The GameStick. Another Kickstarter project that has just hit its funding goal, this Android console takes a form similar to those “Android on a USB stick”-type devices seen here. It also slots away into its own retro-style controller when not in use — and it’s even cheaper than the OUYA at $79. The Kickstarter campaign doesn’t finish until the end of January, and first-run devices are promised in April 2013.

3) The Archos GamePad. Surfing another trend — game-centric tablets — the Archos GamePad is a 7-inch Android tablet with additional controls for games. Coming in at $169.99 and available pretty much now, its spec is underwhelming, and it suffers from several of the usual Archos flaws — looks that only a mother could love and a poor display.

4) The Wikipad. The Wikipad is a 10-inch Android tablet with gaming controls — like a bigger, less ugly Archos GamePad. Although it was due in October 2012, it was “slightly delayed” and still hasn’t seen the light of day. At $499, it is closer in price to Project Shield or the even more expensive Razer Project Fiona than to the cheap and cheerful Kickstarter consoles.

Of course, using an open source OS in a game console device isn’t new; the portable Pandora was announced back in 2009 and runs Linux. In those pre-Kickstarter days, however, production was hampered by a bare-bones “preorder” crowdfunding model, which has led to ongoing issues with contract manufacturers. Some units (including a spruced-up 1 GHz model) have shipped, but it has been slow progress; mine has been on order since July 2010!

A more recently announced Linux game platform is the still-mysterious Valve Steam Box console, which is bound to attract a lot of attention as more details emerge.

Why Android?

Why is Android suddenly a go-to option for those looking to get into the (massively loss-generating) console hardware business?

  • It is free to use with a ready library of tools and functionality. Small startups don’t have the time, money, or expertise to build an OS from the ground up. Android is there for the taking and has already proven capable of simple smartphone games given adequate hardware.
  • It benefits from Google Play and multidevice synergy. Similarly, Google Play offers a mature and varied marketplace for apps for Android devices (usually…see the downside below). If you are a game developer, the ability to target the OUYA, GameStick, GamePad, and Project Shield as well as all the non-game-focused Android tablets and phones via one store and one build is a big incentive. You also don’t need to jump through the fiscal and judgemental hoops that Apple, Sony, and Microsoft impose before getting on to their platforms — though this often leads to the Google store feeling more like the Wild West!
  • It’s optimized for ARM architectures. A key consideration for new hardware builders is how cheaply you can source decently performing components. The high volume of ARM CPUs shipping for tablets and phones — along with firms like Nvidia and Qualcomm continually pushing the price/performance envelope — means it is an obvious choice. Once you have ARM chips, what are you going to run on them? Well, it isn’t going to be Windows RT!
  • It has XBMC compatibility out of the box. XBMC is pretty much established as the media player solution across most platforms, particularly open source ones. It gives the user access to a host of media playback options along with network awareness — a nice extra to add to games on Android platforms.

There are downsides, too, of course. We already have a massively fragmented Android phone market; different OS versions, OEM tweaks, and varying hardware specs make development and deployment of game applications much more tricky than for the carefully controlled iOS ecosystem, for example. This could potentially undo all the synergy that being able to sell to multiple device owners can bring.

What does it mean for the wider videogaming market?

  • It puts price/functionality pressure on next-generation consoles. Sony and Microsoft are expected to announce their new home consoles this year, probably at E3 in June. These will doubtless offer more power and additional network capabilities, but what else — and at what price? Traditional console launch prices have been creeping up since the original PlayStation launched in the US at $299 in 1995. Increased functionality, networking, and hard disk storage have created bloated devices more akin to a PC, with only Nintendo sometimes bucking this trend (see this great analysis by Gamasutra). It’s reasonable to assume that new consoles will be at least $400 to $500. Does that still stack up compared with an $80 Android console?
  • It furthers the cause of the free-to-play (F2P) market. Game developers have learnt from the Apple App Store and Facebook game development that giving your game away and charging for add-ons can be a great strategy when gamers are looking for entertainment for $0.99 or less. However, this can also go staggeringly wrong: see Punch Quest as an example. Android consoles and Google Play will form a natural console home for F2P and casual games. Will Sony and Microsoft aim to compete in this space? We’ll see.
  • Sony has a stealth “in” here but doesn’t seem to care yet. Interestingly, Sony already has half a foot in this camp. Its PlayStation Mobile Android app supports a range of simple twitch and puzzle games, and it even used to offer older original PlayStation games like Crash Bandicoot until Sony inexplicably dropped these last August. While graphically rudimentary, these may still offer better game play than Google Play shovel-ware clones. Better ARM processing power may also mean that PlayStation 2 classics could appear on the platform — but only if Sony gets its act together and increases its support for more complex PlayStation Mobile games. Surely these aren’t seen as being in competition with the (failing) Vita?

Five Consumer Technology Trends To Watch In 2013

2013 will undoubtedly be an “interesting” year for consumer technology. While devices like tablets and smartphones go from strength to strength, PC makers and traditional CE makers in TV, audio, and cameras continue to struggle to turn a profit . . . and that’s before you factor in high-street retail woes and no end in sight to recession-driven belt-tightening. Aside from ongoing evolutionary trends, what will really break through in 2013?

  1. Kickstarter-funded projects need to deliver. One of the breakout successes of 2012 was Kickstarter. Although started in 2009, it was only in 2012 that high-profile projects started to appear and get funded. The downside of this popularity is that more and more projects from inexperienced business startups are appearing. This has led to some high-profile disasters, such as the Code Hero debacle, and a number of projects (particularly games) being abandoned following funding success. Delays are even more endemic: CNN Money compiled an excellent list of the top 50 projects, and just eight shipped on time. In 2013, some of the larger projects, such as the Pebble watch or the Oculus Rift, need to reach consumers if Kickstarter is to maintain its reputation; the increasing recognition that giving money doesn’t entitle the donor to much if a project goes pear-shaped doesn’t help here.
  2. Tablets will really take off, with multiscreen becoming a focus. After a slow start at the beginning of 2012, tablets from Apple and Amazon, as well as Google and its partners, were flying off the shelves by year end; this was less true for RIM and Windows RT devices. Predicting that this category will grow in 2013 is like predicting that gravity will keep working at this stage. But what other trends will the success of tablets lead to? Probably the most important is the rise of multiscreen behavior, particularly when looking at consumer media. Techies have been networking and syncing devices for years, if not decades, but it’s only with the rise of the iPad, Nexus 7, Surface RT, and Amazon Fire HD that normal folks have a device that is seamlessly synced to their media libraries, browsing history, and AV equipment. This allows new behaviors, such as: a) two-screen movie/TV viewing (see Microsoft’s SmartGlass); and b) bookmark portability — both for browser bookmarks and eBooks — meaning that you’re always able to pick up where you left off.
  3. Cloud services and streaming will improve. While enterprise cloud investment continues to be the “next big thing” (as per IDC, Gartner et al), consumer cloud services are going from strength to strength. Apple, Google, and Amazon will now store nearly your whole media library in the cloud, facilitating even more of the multiscreen behavior described above; and Dropbox, SkyDrive, and Google Drive make sharing files between devices and friends easier than ever. Streaming services like Spotify, Pandora, and Deezer are gaining momentum as broadband connectivity becomes more reliable and usage caps get higher.
  4. Gaming will reach a crossroads. 2012 was not a good year for videogaming; a number of developers and publishers closed, and retail was hit hard. Aside from the obvious reasons for this (recession spending reductions and home consoles getting long in the tooth), the rise of both tablets and free-to-play (F2P) games has changed the gaming landscape enormously. 2013 will see the release of new consoles from both Microsoft and Sony, but those publishers that fail to adapt to the new realities of the marketplace may well follow THQ into bankruptcy.
  5. Smartphone proliferation will put new functionality in consumers’ hands. As with tablets, it’s pointless to cite a trend of greater smartphone adoption. It’s far more interesting to look at what new functionality will reach critical mass because of the rapid life cycle of phone development, fuelled by rabid consumer demand. 2013 will see:
    1. NFC and mobile payments reach early-stage critical mass. Whether you’re looking at embedded NFC applications or add-on dongle payment services (like Square, Bank of America, or ROAM), the rapid adoption of new smartphones means many consumers will have access to mobile payment technology sooner rather than later; of course, whether they then feel comfortable making payments this way is a more complex question.
    2. Wireless charging. While 2009’s Palm Pre was one of the first consumer smartphones to offer built-in wireless charging, that device sank along with the rest of Palm (thanks HP!). Now, though, devices from Nokia, Google, Motorola, HTC, and Samsung allow for wireless charging; some, like Nokia’s Lumia 920 and Google’s Nexus 4, include the phone end of the technology by default. Even better news is that most of the devices use the Wireless Power Consortium’s Qi standard, so one charging mat should serve for multiple devices.
    3. The return of Bluetooth for non-headset devices. After a huge explosion in the availability and usage of Bluetooth headsets and hands-free speakers back in the mid-noughties, it seemed that Bluetooth had pretty much had its day; Wi-Fi and high-bandwidth communication protocols like WHDI seemed like the way forward. However, newer versions of Bluetooth (v4) offer better data rates and lower power usage. More importantly, as tablets have grown in popularity, a swath of new accessories has emerged: game pads, keyboards, and portable speakers all trade high data rates (or sound quality) for ease of use. 2013 will see even more of these accessories combined with new usage models — just look at a bunch of those popular Kickstarter projects. In many cases, they pair up nicely with wireless charging — see JBL’s Wireless Charger Speaker for the Nokia Lumia.

Bubbling under

These technologies will make an impact in 2013 but won’t reach critical mass:

  1. 3D printing. Consumer 3D printers are a tremendously exciting field for tinkerers and hobbyists; the ability to print small but complex objects in a variety of materials has the potential to revolutionize many aspects of our lives (distribution, repairs, art, etc.). But it’s all just “potential” at the moment: Consumer 3D printers are messy, are difficult to set up, and struggle with certain shapes (depending on the technology used). Expect lots of stories in 2013 about 3D printing (3D print shops, IP theft, etc.) and significant advances in the quality versus cost of devices from Makerbot, Ultimaker, and Fab@home, along with better software tools . . . but don’t expect to see millions of these devices in consumers’ homes. The 3D print bureau service (like Staples or 3Dprintuk) seems more likely to grow in the short term, in the same way that Kinko’s provided printing and duplicating services for consumers before cheap multifunction printers arrived.
  2. Augmented reality and 3D headsets. High-profile announcements from Google and the increasing power of smartphone and tablet platforms have reignited interest in augmented reality. Similarly, the Kickstarter-funded Oculus Rift has created buzz around 3D headsets. Both of these technologies will offer more immersive experiences, better UIs, and more natural engagement with technology in the future, but component costs, portability, and limited processing power mean that 2013 will not be the year of “X reality” — be it augmented or virtual.
  3. Streaming video to smart TVs and the death of traditional ‘broadcast’. It seems strange that, with all the talk of multiscreen viewing and cloud services taking off, we are still some way from smart TVs and TV-based internet video services becoming successful. 2013 will see this space ramp up significantly – with Apple, Sony, and Intel potentially getting into the space (news from CES may offer some insight) – but rights issues, complexity, long replacement cycles, and mainstream consumer apathy mean it will be some time before traditional sources of TV content (i.e., pay TV providers) see significant threats emerge. The exception may be emerging markets, where broadcasters don’t yet have sports and movie rights tied up and lack a critical mass of signed-up households; smart TV video services could make real inroads here, but hardware prices will stymie much of this.

Cisco’s (Rumoured) Disposal Of Linksys Brings An End To The Firm’s Consumer Ambitions

Cisco has built a formidable business in data plumbing since its creation in 1984. This success with enterprises and the back-end provision of the Internet made Cisco a wealthy company but one with a problem: Where to go when you’ve wired up the whole world?

A major strategy that the firm started about a decade ago was to move closer to consumers (or SMBs) through the acquisition of firms that made consumer premises equipment (Linksys, Scientific Atlanta), consumer devices (Pure Digital Technologies – creators of the Flip camera, KiSS Technology), or services (Pure Networks, makers of Network Magic). Naturally, these firms only represented a fraction of Cisco’s 150+ acquisitions over the years, but they stuck out as firms that weren’t in Cisco’s traditional market areas.

Cisco is also renowned for its ability to embrace, merge, and get good results from firms that it has acquired – so what went wrong with those consumer acquisitions? Why hasn’t the firm built a more recognizable name on the high street, and what does this teach us about today’s consumer technology space? Cisco has:

  • No brand strategy. Cisco was never going to spend Apple-level money to build a consumer brand. Linksys has a good name in routers but only among those who understand/care about such things. And while most Cisco acquisitions could be brought in under the gold-plated Cisco business brand, this also was pretty unfamiliar to consumers. Back when Linksys was acquired, this wasn’t as big a deal as it is today, when branding from Apple, Samsung, and even Microsoft is so dominant.
  • Razor-thin margins. For a firm that made an excellent business of higher-margin enterprise and infrastructure hardware – with additional revenue from training certification and maintenance contracts – making do with the 5% or less margin that manufacturers of successful consumer technologies get was never going to be easy.
  • Increased competition from China. The past 10 years have also seen the emergence of stronger global competitors from China: Huawei and ZTE are the best known. While Cisco may be able to fend off much of the challenge in the enterprise space by playing the quality card or lobbying governments for bans on “security” grounds, stopping cheap home routers and mobile dongles (largely sourced by telcos and cablecos for rebadging and distribution to consumers) is far harder. A firm like ASUSTeK is even competing at the high-end with its excellent “Dark Knight” router.
  • Made bets that misjudged the market. Finally, Cisco made a number of strategic bets that simply didn’t pay off. Two spring to mind: 1) building Linksys music streamers and home servers to compete with the likes of Sonos – both markets have proven to be tiny; and 2) getting into dedicated point-and-shoot imaging devices just as mobile phones became equally competent and user-friendly for shooting YouTube clips.

So what’s next? It seems likely that if the rumors of a Linksys sale do turn out to be true, one of the other consumer networking brands like Belkin, D-Link, or NETGEAR could pick it up. But, ironically, a firm like Huawei or ZTE would benefit the most from the (limited) brand recognition that Linksys offers in the marketplace. The disposal will mean that Cisco retrenches to its heartland of enterprise networking, licking its wounds after an interesting (from an analyst perspective) decade of consumer experimentation.

For other enterprise-focused firms with ambitious consumer goals (we’re looking at you, Microsoft!), this is a cautionary tale – being a great technology company that excels at integrating acquisitions isn’t enough to catch a break in today’s consumer technology world.

Extending PlayStation Plus To PlayStation Vita Could Give Sony A Much-Needed Holiday Boost

While Sony has been running the PlayStation Plus (PS+) subscription service since June 2010, it has become a lot more interesting during 2012: first, we saw the introduction of the “instant game collection” for PS3, and this week saw the addition (at no extra cost) of PlayStation Vita games (and online save backup).

 

Effectively, Sony is turning PS+ into a high-value subscription service that exploits an extensive back catalogue of game titles by distributing them electronically at virtually no cost. This has advantages, such as a predictable revenue stream and the generation of usage data. Obviously, the downsides are that revenue from traditional sales of a game title is lost, and the remuneration for third-party publishers whose games are included has to be carefully balanced. This contrasts with Microsoft’s Xbox Live Gold service — which is basically a souped-up version of the free Xbox Live service; Sony has always given more away for free on its network.

In what promises to be a tough Q4 for all videogame markets, Sony’s move could significantly boost its hardware sales and PS+ subscriptions. In terms of consumers, it’s clear that this announcement affects three key groups:

  1. Existing PlayStation 3 (PS3) and Vita owners without PS+ subscriptions. This group is probably the least affected; these early adopters probably already own most of the titles included in PS+ for the PS3 and Vita. They may be slightly miffed that so many games for which they paid full price are being given away. They are unlikely to sign up for PS+ in the near term, but they could be a medium-term opportunity if Sony adds newer titles and other benefits.
  2. PS3 PS+ subscribers without a Vita. This is the most obvious target group; effectively, Sony has cut $80 off the cost of a Vita by bundling two of the best-known first-party titles — Uncharted: Golden Abyss and Gravity Rush. While (unlike Sony) we don’t have subscriber device profiles for PS+ users, I’d guess that 80% or so still haven’t invested in a Vita, though some of these gamers will also have legacy PSP digital downloads in their account. Conservatively, this offer could convert 10% to 20% of these owners into Vita owners in the next three to six months — especially as the third-party AAA Vita titles start to arrive at the same time (see the rough sizing sketch after this list).
  3. New consumers. The Vita addition makes an all-in PS3/Vita/PS+ bundle much more attractive, but it will still be a significant outlay for those who have resisted investing in the Sony ecosystem for 5+ years. While there may be some upside here, it will be fairly limited.
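
To put rough numbers on that conversion argument, here is a back-of-the-envelope sketch in Python. The 80% "no Vita yet" share and the 10% to 20% conversion range come from the text above; the overall PS+ subscriber base is a placeholder figure of my own, since Sony's actual number isn't given here.

```python
# Rough sizing of the PS+ -> Vita conversion argument made above.
# The 80% "no Vita yet" share and the 10%-20% conversion range come from the text;
# the subscriber base below is a made-up placeholder, not a Sony figure.

ps_plus_subscribers = 1_000_000                # hypothetical PS+ base (placeholder only)
share_without_vita = 0.80                      # guess from the text
conversion_low, conversion_high = 0.10, 0.20   # "conservatively, 10% to 20%"

addressable = ps_plus_subscribers * share_without_vita
low = addressable * conversion_low
high = addressable * conversion_high

print(f"PS+ subscribers without a Vita: {addressable:,.0f}")
print(f"Potential new Vita owners over 3-6 months: {low:,.0f} to {high:,.0f}")
```

Swap in whatever subscriber number you believe; the point is that even a conservative conversion rate translates into a meaningful hardware bump.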

What else could Sony do? A couple of things may help even more:

  • Discount coupons for larger-format memory cards. The price of memory cards for the Vita is still one of the most common complaints in user forums. Any mechanism that allows PS+ subscribers to get a 10% to 15% discount voucher for memory cards would help here. Making this a limited-time offer may also spur the buying decision. This could even tap into the first group mentioned above and help them justify the PS+ sign-up; they could save £6 on a 32 GB card and still get the one or two games they haven’t already bought for the Vita. Of course, this opens up a world of hurt in terms of dealing with merchants and making sure that discount codes aren’t reused, but it could potentially boost hardware and PS+ adoption in the short term.
  • Get those physical PSP games onto the Vita. The obvious benefit that would also play to our first group is a version of the “disc-to-digital” program (similar to the UMD Passport program launched in Japan when the Vita came out) for UMD-based PSP titles: let new PS+ subscribers pick two or three of their legacy PSP games for conversion to a Vita digital copy. Again, there is probably quite a lot of work here in terms of authentication, validating the title list, etc., but if many of these titles are already available in the store, there is little downside. This could also strengthen the case for buying a Vita for our second group as well, but to be fair, the case is already pretty strong for them.
  • Offer discounted DLC for Assassin’s Creed and Black Ops via PS+ as soon as possible for both the PS3 and Vita. Looking beyond the groups mentioned above, aggressively priced hardware bundles and the new AAA third-party titles should sell some additional Vita hardware to the more casual FPS gamer, even if some of them (such as Black Ops on Vita) are reviewing badly. Assuming many of those buyers have a PS3 but don’t have PS+, adding discounted DLC to PS+ could push them over the edge to subscribing — particularly as those titles are unlikely to be included in the subscription any time soon.
  • Market the “great value” PlayStation message. There seems to be a definite anti-Sony feeling this holiday season: the Wii U is offering new hardware; the Xbox is (seemingly) becoming the other major supported format, particularly at supermarkets and non-dedicated retail; and Sony faces criticism that the new PS3 design didn’t come with a price cut and that the Vita is too expensive. Without getting into negative messaging, the value of PS+ (even with just a PS3) is probably Sony’s trump card. Spinning this with a message of “But wait! There’s more! Free quality Vita games!” could at the very least drive PS+ sign-up and probably get that Vita PS+ message out there.

The 2012 holiday season will be the final push for Xbox 360 and PlayStation 3 before new consoles are released next year. Given how radically the gaming landscape has changed over the past two years, it will be interesting to see how successful these final quarters are. It’s even more critical that the Vita makes an impression after a lacklustre launch and increasing competition from smartphones and tablets.

The Surface RT Is D.O.A. — Few Consumers Will Buy Microsoft’s ARM Tablet

Oh dear: I had such high hopes of Microsoft’s Surface tablets — particularly when those rumors of an extremely aggressive price of $199 started circulating. Even the speculation around a $299 to $399 price point left some hope of success. Now that the pre-order service has gone live, it’s apparent that the price point Microsoft has chosen will restrict its sales to the usual fervent tech buyers and Microsoft staff (although they don’t get one free from the company, which is actually quite a good way to improve unit shipments).

Priced at $499 for a 32 GB version — plus an additional $100 for arguably its best innovation, the keyboard cover — the Surface RT simply isn’t competitive. Sure, it’s a similar price to an iPad (but probably around twice the price of an iPad Mini) and may be similarly priced to the (unseen) 10-inch Nexus when released (but more than twice the price of the Nexus 7), but this ignores the installed base and apps ecosystem for the Android and iOS devices — and you don’t even get a full Windows experience on this ARM tablet. A cut-down version of Office is nice, and may be worth up to $50 for some consumers, but an Intel-based Acer Iconia W510 can be had for the same money. And arguments about differences in on-board storage make less and less sense as these devices increasingly tap into iCloud, SkyDrive, etc.

Microsoft is also ignoring the stage of development of the tablet market. We are now seeing third- or fourth-iteration tablets on rival platforms, and firms like Amazon are lowering costs by using differing business models. This isn’t like Xbox, where Microsoft could jump in at the start of a new generation because each generation effectively started from scratch; it’s not even like Internet Explorer, where the firm was late to market but used its sheer critical mass to drive the browser to No. 1.

It’s a depressing illustration of the position that Microsoft finds itself in – keen to be a “devices and services” company but tied to a variety of OEMs that it is desperate not to offend (at least in the short term). It has to price high and build hardware to “inspire” partners, but the trouble is that few are inspired by devices that fail to sell. Ironically, after all the efforts to port Windows to ARM architectures, Microsoft may have been better served by waiting a year or so until x86 tablets had established an ecosystem and then releasing the ARM device with better battery life and a more competitive price (as component costs fall).

As more and more OEMs release details of their Windows 8 touch devices, pricing trends are starting to become apparent: $499 to $649 for an x86 tablet (Acer, Lenovo); $500 to $800 for a touch-enabled laptop (pretty much all the OEMs); and premium pricing for large all-in-ones and innovative form factors (Asus TaiChi, Sony Vaio Duo 11, Dell XPS 12). All in all, this pricing is reasonable and demonstrates where OEMs are focusing: touch-enabling traditional form factors and sticking with x86 architectures. It will be these devices (with perhaps a couple of cheap OEM RT tablets) that businesses start to experiment with and that consumers buy as their “next PC” — not the failed attempt to jump on the ARM bandwagon that the Surface RT represents.

Don’t get me wrong: I think the Surface is a beautifully designed tablet with some excellent engineering and a novel UI — better than most of the existing competition. Doubtless, the couple of hundred consumers who buy them will love them to bits. Unfortunately, this all sounds depressingly familiar; perhaps Microsoft should have called it the Zune HD Surface.

Acer And Lenovo Price Their Windows Tablets: Not A Bad Start, But It Looks Expensive For Consumers

In what is likely to become a trend over the next month, two OEMs have announced their Windows tablet pricing pretty much at the same time. Acer has bagged the “first!” title for announcing its Windows 8 tablets and getting them into journalists’ and bloggers’ hands (I’m assuming, of course, that those leaked ASUS prices were simply a joke/placeholder). Shortly after, Lenovo unveiled the pricing for its raft of tablets. My initial impressions: not bad, some interesting innovation, still a bit pricey for consumers (particularly for an Acer), and it throws Microsoft Surface pricing into even more doubt.

Acer

As of November 9, you will be able to buy a Windows 8 tablet for less than $500. That price will get you the 10.1-inch, 32 GB Acer Iconia W510, fully $100 cheaper than the equivalent iPad (although the screen resolution is much lower). But, disappointingly, to get a keyboard dock included, you need to get the more expensive 64 GB version with a $750 sticker price; as yet, there’s no price for buying the keyboard dock separately.

Interestingly, Acer also plans to sell the Iconia W700 for $799 to $999. Wait, what? An 11.6-inch tablet using a “proper” Intel CPU, with a better resolution screen, twice the storage, a dock, and a Bluetooth keyboard will be within $50 of the keyboard version of the W510?

Lenovo

Lenovo seems keen to cover every conceivable base with its Windows tablet offerings:

  1. The ThinkPad Tablet 2 is broadly similar to the Acer Iconia W510 and starts at $649 ($799 with a keyboard dock). The ThinkPad name clearly indicates that this is a business-focused device.
  2. The ThinkPad Twist is another business-focused machine and is really a convertible laptop rather than a true tablet. It starts at $849 — a fairly aggressive price for a true laptop replacement.
  3. Similarly, the IdeaPad Yoga 13 is a 13-inch convertible that starts at $1,099 (yikes!). There is also an 11-inch IdeaPad Yoga 11 that — most interestingly — is a Tegra-based Windows RT device starting at $799.
  4. Finally, the IdeaTab Lynx is the consumer tablet (and a close relative to the Acer Iconia W700). It has an 11.6-inch display with a Clover Trail processor and starts at $599 (plus $150 for the keyboard dock).

What this means:

  • We’re unlikely to see x86-based OEM Windows 8 tablets for less than $500 — or $750 for keyboard versions. While it is trying its utmost to shed its cheap-and-cheerful image, Acer still tends to come in at the low end of the OEM pricing spectrum. Similarly, Lenovo’s consumer PCs tend to emphasize value, while the ThinkPad business range focuses on solid reliability. Here, we can see the two OEMs with probably the lowest prices. Sony, HP, Samsung, and ASUS will almost certainly charge more for their equivalent tablets, and even Dell is likely to be on par at best.
  • The low-end Windows tablets aren’t a great replacement for laptops. We must also remember that the Clover Trail-based tablets are basically rocking an optimized netbook processor. While this should guarantee good battery life and a cooler running temperature, we’ve yet to see how well this tablet configuration performs in real-world, multitasking use; it’s likely to be as good as other tablets, but it certainly isn’t a replacement for a decent laptop. True PC replacements will probably need either: 1) Intel Core i3/i5/i7-based CPUs, which will likely be markedly more expensive (I did mention what a bargain the Surface Pro is looking to be!), or 2) the newly announced AMD Z-60, which will carve out a middle ground between the two Intel platforms.
  • Is the Windows Surface Pro really “only” $800? This pricing also makes Ballmer’s $300 to $800 price range for Microsoft Surface devices look shaky. The x86-based Surface Pro will be at the top of that super-wide range — but has a high-resolution display, 64 GB or 128 GB of storage, a keyboard/cover, and a Core i5 processor instead of the weedy Clover Trail Atom processor that’s in both the Iconia W510 and IdeaTab Lynx (warning: don’t use it on your lap!). That starts to look like a comparative bargain if it really is only $800. With an extended warranty, the Acer Iconia W700 already runs up to $1,049.
  • The $199 Windows Surface RT really was just wishful thinking. ARM-based Windows 8 tablets should be cheaper than x86 tablets, but how much cheaper? $299 to $399 or $399 to $499 seem to be the current bets (without keyboards), but we haven’t seen any offerings as cheap as this yet. Lenovo is obviously hoping that the innovative design of the IdeaPad Yoga 11 will command a massive premium! Let’s face it, a $499 Windows RT tablet isn’t going to fly with consumers, particularly when an x86 tablet is the same price (albeit potentially with a lower spec) — and that’s even assuming consumers can be dragged away from the Apple aisle. Even $350 to $399 is beyond the “impulse buy” price range that might help them fly off the shelf. I’m not sure Microsoft will need those midnight openings to satisfy pent-up demand.
  • Obviously, the BOM (bill of materials) is restricting how low prices can go. There is already some excellent analysis putting the cost of building a Microsoft Surface RT tablet at $300+; an x86 version would be slightly more. As Android tablet makers discovered, the display and touchscreen elements don’t come cheap – and that was when they didn’t have to pay for an OS! For a new product category without the volume supply and manufacturing economies that Apple enjoys, there is precious little margin to be had if the retail price is to be attractive to consumers (a rough margin sketch follows this list).
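
To illustrate how quickly that margin evaporates, here is a back-of-the-envelope sketch in Python. The $300 BOM estimate and $499 retail price come from the text; the retailer cut and the bundle of other per-unit costs are assumptions of mine, included purely to show the shape of the arithmetic.

```python
# Back-of-the-envelope margin estimate for a $499 Windows RT tablet.
# The ~$300 BOM figure and the $499 retail price come from the text above;
# the retailer cut and "other per-unit costs" are illustrative assumptions.

retail_price = 499.00       # launch price without keyboard cover (from the text)
bom_cost = 300.00           # estimated bill of materials (from the text)
retailer_cut = 0.10         # assumed share of the retail price kept by the channel
other_costs = 60.00         # assumed: assembly, logistics, warranty, marketing, licensing

vendor_revenue = retail_price * (1 - retailer_cut)
gross_margin = vendor_revenue - bom_cost - other_costs

print(f"Vendor revenue per unit: ${vendor_revenue:,.2f}")
print(f"Gross margin per unit:   ${gross_margin:,.2f} "
      f"({gross_margin / vendor_revenue:.1%} of vendor revenue)")
```

Even with these fairly generous assumptions, the per-unit margin is thin; push the retail price toward $399 and it disappears almost entirely.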

To be fair, it’s still early days. The Surface Pro isn’t scheduled to be released until early 2013, and Acer or Lenovo could adjust prices down as other OEMs lift the veil on their hardware. The most promising devices so far (that we know the price of) are those PC-replacement convertibles; they match the price of Ultrabooks and offer the best of both worlds, albeit in a more bulky package.

My guess is that by the end of the year, Clover Trail Windows 8 tablets (without keyboard) will be available from $399; “PC-replacement” tablets will be around the $799 mark; and Microsoft will still choose to price the Surface Pro at more than $800 ($899 or $999 seem most likely) to appease its OEM partners.

The Synology And Symform Partnership: The Future Of Consumer And SMB Storage

Last week, Synology announced a partnership with innovative cloud storage firm Symform. Finally, someone is combining the peace of mind and worry-free connectivity speed of the network-attached storage (NAS) drive with the convenience of cloud storage. Arguably, this is targeted more at small and medium-size businesses (SMBs) — those big enough to have a security and backup policy but too small (or too cheap) to build an enterprise relationship with a commercial cloud provider like Amazon. However, even with my consumer-focused hat on, I see a lot to like:

  • It allows a trickle update of cloud files. As anyone who has wondered what the hell Microsoft’s SkyDrive desktop app is doing as it whirs away for a couple of hours will know, syncing to the cloud is still pretty tedious — especially if your connectivity speed isn’t up to scratch. By adding a central repository of your data on the always-on NAS drive, you bypass this issue while still ensuring your files are available in the cloud.
  • It pairs two technologies that lack sufficient mainstream appeal, creating a compelling hybrid. As I’ve said before, NAS technology hasn’t hit the levels of popularity that I expected it to 5-8 years ago; it will ultimately be replaced by cloud storage — but not for many years yet. I’ve also said that in the long term, cloud storage isn’t really an “application” or a “service”; it’s a facet or feature of other applications or services. Pairing lots of local networked storage with cloud back-up (or even just key directory duplication) means that you are getting the best of both worlds now rather than waiting for all-encompassing, super-reliable online services of the future.
  • Symform’s business model doesn’t limit the amount you store in the cloud (unlike its competitors). Effectively, Symform works as a coordinator of available peer-to-peer (P2P) storage: Agree to let Symform use some of your hard disk space (either on a PC, a NAS drive, or a server), and it will give you half of that amount as cloud storage for free (on top of the initial 10 GB allowance); the sketch after this list shows the arithmetic. Symform promises secure, regulatory-compliant, globally distributed cloud storage for little more than the price of adding a new hard disk to your rack/NAS/PC – and that’s only if your existing storage is nearly full. Of course, you can pay as well . . . but that makes the offering significantly less attractive.
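
As a minimal sketch of that allowance model (a free base allowance plus half of whatever space you contribute), the snippet below captures the arithmetic described above; the function name and the example contribution figures are mine, not part of Symform's actual API.

```python
# Sketch of the Symform allowance model described above: a free 10 GB base
# allowance plus half of whatever local disk space you contribute to the pool.
# The function name and example values are illustrative, not Symform's API.

BASE_ALLOWANCE_GB = 10  # free allowance mentioned in the text

def cloud_allowance_gb(contributed_gb: float) -> float:
    """Free cloud storage earned in return for contributing local disk space."""
    return BASE_ALLOWANCE_GB + contributed_gb / 2

# Example: dedicate part of a NAS volume to the peer-to-peer pool.
for contributed in (0, 100, 500, 2000):
    print(f"Contribute {contributed:>5} GB -> "
          f"{cloud_allowance_gb(contributed):>6.0f} GB of free cloud storage")
```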

But, there are some bridges yet to cross:

  • It still means shelling out at least $500 for the local storage. Cost remains the biggest issue for consumers or small businesses looking at network storage. Why would you pay at least $400 for the most basic 2 TB Synology NAS set-up (for example, the DS212j plus two Western Digital 2 TB drives) when you can buy a 3 TB Seagate external USB3 drive for $135?* Well, there are lots of reasons that a seasoned IT professional would recognise: availability, redundancy, multidevice access, file syncing, and management tools, to name a few . . . but none of these resonate with mainstream consumers (or even the small end of the business world). Let’s not forget that consumers are still failing to manage and back up the gigabytes of unique and irreplaceable content generated by their digital cameras.
  • Symform’s business model is both a blessing and a curse. While I commend Symform for coming up with something different from the largely interchangeable offerings of Box, Microsoft, Google, Amazon, and Dropbox, there are still some thorny questions that need answering:
  1. Can Symform make money if only a small fraction of users pay for storage? Of course, you could argue that the same can be said of Dropbox or Box; at least Symform doesn’t have to invest in building massive storage capacity to support its service.
  2. Can a P2P solution rival a big honking data center in Texas for reliability and speed? Again, there is a persuasive argument that P2P is more robust and efficient than traditional “client-server” models — just look at BitTorrent technology. As with BitTorrent, redundancy will be the key; it’s imperative that a user’s files aren’t corrupted if another customer’s storage node drops out of the pool (the toy parity sketch after this list illustrates the idea).
  3. Is it legal? This is an argument that could run and run (and I’m not a lawyer…don’t even play one on TV). Government agencies already frown upon cloud solutions which store files/data outside their home geography. Does distributing tiny fragments of files globally make this better or worse? Similarly, can the US government ask for access to customers’ files as they can from other US cloud providers? Incidentally, it’s a myth that the 2001 Patriot Act makes the US the only country able to do this.
  4. Given the above, is Symform a long-term bet? Back-up is, by definition, all about peace of mind. You want your data to be secure both now and for the foreseeable future. This makes Symform a risky bet for businesses — although at least switching to a different service is easier these days than replacing actual physical back-up devices.
  • The security and confidentiality of cloud storage will continue to be an issue, especially given Symform’s business model. When it comes to cloud storage, IT pros rightly point out that file security and confidentiality can be a real issue; you are effectively transmitting your files (usually unencrypted) to a remote data centre protected by a single password. And you could argue that this issue is compounded by Symform then farming out the virtual data center to other individuals’ NAS drives.
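
To make the redundancy point concrete, here is a toy Python sketch: a file is split into fragments, one XOR parity fragment is added, and any single lost fragment can be rebuilt from the survivors. This only illustrates the principle; a real distributed storage service would use far more robust erasure coding, and nothing here reflects Symform's actual scheme.

```python
# Toy illustration of parity-based redundancy for peer-to-peer storage.
# Split a blob into fragments, add one XOR parity fragment, and show that
# any single lost fragment can be rebuilt from the survivors. Real services
# use far stronger erasure coding; this is a sketch of the principle only.

from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(data, k):
    """Split data into k equal-length fragments (zero-padded) plus one parity fragment."""
    frag_len = -(-len(data) // k)  # ceiling division
    padded = data.ljust(frag_len * k, b"\0")
    frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    return frags + [reduce(xor_bytes, frags)]

def rebuild_missing(fragments):
    """Reconstruct the single fragment marked as None by XOR-ing all survivors."""
    missing = fragments.index(None)
    survivors = [f for f in fragments if f is not None]
    restored = list(fragments)
    restored[missing] = reduce(xor_bytes, survivors)
    return restored

original = b"family photos and tax returns"
pieces = split_with_parity(original, k=4)   # 4 data fragments + 1 parity fragment
pieces[2] = None                            # simulate one storage node dropping out
recovered = rebuild_missing(pieces)
assert b"".join(recovered[:4]).rstrip(b"\0") == original
print("File recovered despite losing one node's fragment.")
```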

Overall, I hope that Symform succeeds; it is trying something different and, in theory, offering a valuable free-ish service with little downside. The Synology partnership certainly strengthens its hand, while also making Synology’s NAS drives more appealing. There is bound to be a shake-up in the cloud storage market in the next 12 to 18 months; too many firms are offering free or low-cost storage with little differentiation. Symform at least has the advantage of a different infrastructure and business model.

* Of course, we’re not strictly comparing like with like here; the 2 TB Synology set-up offers RAID redundancy, and it could instead be configured as a 4 TB storage option.

Wii U: Can Nintendo Win Through With Its (Probably) Final Home Console?

At various points during the past 24 hours, Nintendo has revealed the release dates and prices of the Wii U console in different geographic markets. Two versions will ship: a basic White Wii U (Japan: ¥26,250; US: $299; UK/EU: around £200 or €250) and a Premium Black Wii U (Japan: ¥31,500; US: $349; UK/EU: around £250 or €310). For the extra $50, aside from a more traditional console colour, you get four times more Flash memory (32 GB), charging stands for the tablet controller and console, a bundled Nintendo Land mini-game collection, and a three-month pass for the “Nintendo Network Premium” online service. Initially, games won’t support a second tablet controller (which, when purchased separately, will cost a whopping ¥13,440 in Japan — the only market where they are available separately at launch), but this will come over time. The good news is that almost all the Wii controllers, balance boards, and other random bits of plastic you’ve invested in should work with the new console.

Some initial thoughts:

  • Is the pricing right? Putting aside regional variations — Japan has always paid more for its consoles, and Europe has variable value-added tax — the price of the new console isn’t too bad. Sure, it’s higher than traditional Nintendo launch prices, but this was partly forced on the company by the competition. It’s certainly cheaper than the last-generation launch prices from Microsoft and Sony. A more interesting question is whether consumers are still prepared to pony up $350 for a new console when they have other compelling options like tablets, smartphones, and social gaming in which to invest. Incidentally, Gamasutra has a very nice comparison of historic console launch prices, even adjusting for inflation, here.
  • The tablet controller offers some interesting “second screen” game-play opportunities. Many game publishers already complement console/PC releases with companion iPad games or apps (e.g., Mass Effect 3’s Infiltrator and Datapad apps). Nintendo and Sony have also offered console connectivity for their portable consoles in the past. But this is the first time that the second display can be taken as a given for the entire console-owning base. Naturally, the first opportunity is to use the controller screen in a similar fashion to the lower touchscreen on a Nintendo DS. Over and above that, though, developers are pushing the boundaries with asymmetric multiplayer gaming as a real differentiator — i.e., up to four “players” use traditional controllers to interact with the game on the TV, while another player assumes a “God” (or spectator) role with the tablet controller, looking on and driving the overall experience. This effectively takes role-playing gaming right the way back to the original Gary Gygax Dungeons & Dragons tabletop game. Penny Arcade sees some cynical, but probably true, downsides to this!
  • This brings second-screen TV entertainment to the rest of us. The Wii U TVii functions look nice, allowing social discussion, deeper program engagement (maybe with advertising?), and a more intuitive program guide integrated with multiple providers . . . all for free out of the box in North America. Not that you couldn’t already do this with an iPad (costing from $399) or with an Xbox 360 and SmartGlass (provided you have Xbox Live Gold for $60 a year and a compatible tablet costing . . . well, who knows!) if you were a geek with money to burn. So it’s a much cheaper solution for what looks like a nice experience. All the major US/Canadian networks are on board, along with Netflix, Amazon, Hulu, etc. It’s not yet clear how much extra effort these content and distribution firms will invest in generating the required metadata, but at least many of the sports stats, trivia, and social connections are pretty much already there to be tapped into.
  • Why is this likely to be Nintendo’s last home console? We’ve watched OnLive crash and burn and Gaikai be absorbed by Sony, but that doesn’t mean game streaming is dead — and the days of dedicated game consoles are still drawing to an inevitable close. Why? For the same reason that dedicated cable boxes or video-streaming boxes like Boxee will disappear. The technology will be incorporated into other devices; it could be the TV, a wirelessly connected tablet, or eventually a properly functional cloud streaming service. Incidentally, Microsoft and Sony also have just one more console in them; by the end of that generation (in five years’ time, perhaps), I’d expect expensive, dedicated console hardware to have run its course.
  • It has the field to itself for a year. Speaking of the competition, it’s clear that we won’t see new consoles from Sony and Microsoft for at least a year, maybe longer. Nintendo will have the “next generation” to itself — although Sony and Microsoft will argue, with some validity, that the Wii U is only really comparable with their current generation. Another factor that may help Nintendo in the closing months of 2012 is the delay of several key titles (such as BioShock Infinite, Tomb Raider, Aliens: Colonial Marines, DmC, etc.) to early 2013, leaving core gamers with extra money to spend; some of that may well head Nintendo’s way.

So, can the Wii U succeed? It’s by no means a slam-dunk for Nintendo. Many dedicated gamers — Nintendo’s old core audience — felt let down by the “casual” games that proliferated on the Wii (ironically, the same games that made the console a mainstream success) and by too many Mario ports (no sign of that changing), while mainstream consumers have long since boxed up their Wiis. Add in the rise of social gaming on PCs and tablets, and the appeal of a dedicated console that doesn’t even play DVDs, let alone Blu-ray discs, and that ships with just one (albeit innovative) tablet controller, looks limited. But Nintendo needs this to work. Unlike Sony and Microsoft, it doesn’t have a fall-back business model or “multidevice living-room strategy” from which to recoup its investment. And the additional pressure in the portable gaming space from smartphones and tablets means that Nintendo really has a battle for survival on its hands.

My take: At this stage, I’m prepared to give Nintendo the benefit of the doubt. The hardware looks good; it has strong support from publishers and TV content/distribution owners in North America; and backward compatibility with Wii titles means that there is an extensive collection of games out there in addition to the launch window titles. By the end of Q1 2013, we’ll have a better idea of whether Nintendo will survive as a home console platform owner or follow Atari and Sega down the software-only route.