
The Big 8: Eight transformative technologies to pay attention to in 2018


IHS Markit has identified eight transformative technologies to watch in 2018. Some are new, and some are rapidly evolving—and even teaming up—in ways that are disrupting industries and fueling innovation at lightning speed. Take a look at what we think you need to pay attention to:

1. Artificial Intelligence (AI)

Industry experts are abuzz with innovations in autonomous vehicles, infotainment, natural language processing, and augmented reality. Stay sharp by listening to concerns about data security and the debate over cloud-based vs. device-based implementations.

IHS Markit graphic on rate of AI-based systems in new vehicles from 2015 to 2025

2. Internet of Things (IoT)

The IoT is converging with other transformative technologies in 2018 to create new growth opportunities. Pay particular attention to analytics-driven IoT platforms, computing at the edge, and data exchange brokerages (DEBs), as IoT influences everything from home security systems, to oil and gas exploration, to smart cities, to retail experiences, and beyond.

IHS Markit graphic of IoT devices at the edge from 2018 to 2025

3. Cloud and virtualization

Cloud services will pave the way for technologically immature companies to utilize machine learning (ML) and AI, radically transforming their usage and understanding of data.

IHS Markit graphic of off-premises cloud services market roadmap

4. Connectivity

While it’s important to keep track of 5G’s rapid and meaningful development, it is crucial to pay attention to other connectivity technologies as well, such as LPWA, private LTE, D2D/mesh, mobile broadband, fiber, and LEO satellite.

IHS Markit graphic of LTE-M and NB-IoT connectivity by 2020

5. Ubiquitous video

Expanded video capabilities will have an impact far beyond the video supply chain itself. Look for bigger and better screens, plus continued growth in video consumption, to alter strategies in multiple markets not directly tied to that supply chain.

IHS Markit graphic of flexible and foldable displays from 4K to 8K

6. Computer vision

Using technology to read images and analyze data as well as—and as quickly as—the human eye will quite literally transform our world. From facial recognition software to self-driving cars, advanced robotics, and surveillance, this technology will accelerate innovation across multiple industries.

IHS Markit graphic of major issues and buzzwords in computer vision

7. Robots and drones

Talk about robots and drones is nothing new. But 2018 could show us the deeper story: the disruptive potential of robots and drones to transform long-standing business models in manufacturing and industry. From logistics, to material picking and handling, to navigational autonomy and delivery, robots and drones are poised for a big breakout.

IHS Markit graphic of estimated global market for robots and drones in 2018

8. Blockchain

Blockchain-based services beyond financial services are already being deployed and will continue to grow in 2018. Yet the commercial potential is still unclear, and many challenges remain. Mainstream adoption is dependent upon continued experimentation and business model evolution.

IHS Markit graphic of Blockchain adoption

For a complete analysis of the eight transformative technologies of 2018, download the white paper. You can also download the full PDF of this blog posting.

IHS Technology Group at IHS Markit
Posted 5 January 2018



The state of the industry: What's in store for media and entertainment?


Content proliferation, globalization, and the rise of operational data are among the five titanic forces that will be shaping entertainment in the next five years, according to IHS Markit analysts in a state-of-the-industry presentation during the Media and Technology Conference held 6 June in London.

The entertainment universe will also have to contend with the two other big forces in play during this period: 5G, and the way functional benefits will supersede quality among consumers of media goods and services.

The state-of-the-industry presentation was delivered by Dan Cryan, senior director for media and content at IHS Markit. The subject of the five forces was among the many topics covered during the conference, which focused attention on prospective issues and future developments expected to affect media advertisers and broadcasters across various platforms. IHS Markit analysts and senior media industry executives also examined key trends impacting the space, and how the industry will evolve throughout 2017 and beyond.

For each of the five forces identified, IHS Markit tackled a successive array of important points designed to illuminate specific aspects of the rapidly changing media landscape.

  • Content proliferation: More content is being made across film production, TV programming, and non-traditional online video, all of which is competing for the attention of consumers and for advertising spending. This applies throughout the entire spectrum of entertainment content.
  • Globalization: Competition is becoming more globalized, as the traditionally localized media business now has to compete in an increasingly globalized market against competitors of global scale.
  • Functional benefits trump quality: Consumer-format adoption in music, games, movies, and TV has traditionally been driven more by functional benefits than by improvements in quality, and signs indicate that this will continue.
  • Rise of operational data: Data and machine learning are permeating all aspects of the media business, from content commissioning, to recommendation, to churn reduction, fundamentally changing the way advertising is sold.
  • 5G: High expectations abound, but the "when" and "how" of implementation and performance of the new, much-faster mobile standard will vary a great deal by country, as will the definition of "5G."

The conference included two keynote speeches: "Innovation Scores a Winner," from Jamie Hindhaugh, chief operating officer at BT Sport; and "UX x Technology: The implications of Entertainment Platform Trends for User Experiences," from David Harold, vice president of marketing communications at Imagination Technologies.

The conference also hosted four sessions, each addressing an area of current interest. Those sessions, in chronological order, included streaming and the future of TV; new avenues for TV advertising; the case for ultra-high-definition (UHD) and more compelling viewing; and how opportunities are shaping up in the consumer virtual reality (VR) space.

IHS Markit speakers at the event, in the order of the sessions, included Jonathan Broughton, senior analyst for home entertainment; Daniel Knapp, PhD, senior director in media and advertising; Paul Gray, principal analyst for consumer devices; and Piers Harding-Rolls, director in games. Various industry executives were also on hand to serve as panelists at each session.

All told, the London event drew nearly 300 attendees, including C-suite officers, executive directors, and senior-level personnel from 57 companies in a wide range of industries, among them Amazon, Apple, BBC, Deutsche Telekom, Google, Goldman Sachs, Huawei, Microsoft, NTT DOCOMO, Shell, The Walt Disney Company, Twitter, and Vodafone.

For more information, visit our Media & Advertising research service.

IHS Technology Group at IHS Markit
Posted 15 June 2017

Tape and disk storage present viable long-term video surveillance storage options


The video surveillance industry has evolved dramatically over the last 50 years: from whirring VCRs, to digital video recorders (DVRs) and spinning hard disks, and now to network-based video surveillance and associated information technology (IT) infrastructure.

For end users, however, the thorniest challenge at present is storage. With increasingly large amounts of video surveillance data being generated, and more requirements to store this data for longer periods of time, many traditional approaches to long-term storage are proving too expensive to cope with the data explosion in video surveillance. One solution that has seen increased market traction of late, especially in high-camera-count systems needing the most cost-effective long-term data retention, is to use different types of storage media in a multi-tiered environment. This approach can satisfy the demanding storage requirements of video surveillance data without compromising on the quality of the video being stored or the length of time the footage can be retained.

Three market drivers

Three main drivers account for the video surveillance data explosion and greater video surveillance storage requirements: the increasing number of cameras in use overall; the more advanced specifications of these cameras, offering higher megapixel counts up to and above 4K ultra-high-definition resolution; and the greater value and longer retention times now placed on data by multiple types of end users, going beyond those in traditional security.
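To see how these drivers compound, consider a rough sizing calculation: storage demand scales linearly with camera count, per-camera bitrate (which rises with resolution), and retention time. The sketch below is illustrative only; the bitrates, camera count, and retention periods are assumptions, not IHS Markit data.

```python
# Illustrative sizing sketch: camera count, bitrate, and retention
# multiply into total storage. All figures are assumptions chosen for
# illustration, not IHS Markit data.

def storage_tb(cameras: int, mbps_per_camera: float, retention_days: int) -> float:
    """Raw storage, in terabytes, for continuous recording."""
    seconds = retention_days * 24 * 3600
    total_bits = cameras * mbps_per_camera * 1e6 * seconds
    return total_bits / 8 / 1e12  # bits -> bytes -> terabytes

# A 100-camera site: ~4 Mbps for 1080p streams vs. ~16 Mbps for 4K.
for mbps in (4.0, 16.0):
    for days in (30, 90):
        print(f"{mbps:>5} Mbps x {days:>3} days: {storage_tb(100, mbps, days):7.1f} TB")
```

Moving a site from 1080p to 4K while tripling retention multiplies raw storage by roughly twelve, which is exactly the squeeze pushing long-term footage off primary disk.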

This year, more than 127 million video surveillance cameras will be sold via professional channels.

IHS Markit graphic of video surveillance in 2017

Multiple tiers of storage

Where data storage is concerned, treating all video surveillance data with a one-size-fits-all approach can be inefficient. Most video surveillance footage is never reviewed, and both the likelihood of review and the value of the data often diminish over the data's lifetime. Yet end users often still require storage of video surveillance footage at high quality and for extended periods. Some end users, such as police departments or local governments, have found that traditional single-tier storage, such as all-disk systems or disk-cloud hybrids, is not cost-effective for their long-term storage requirements.

An effective way to combat the increased costs associated with greater storage requirements is to think differently about data use. In a multi-tiered storage approach, different types of storage media, such as disk, file-based tape, and flash, can be combined into a single file system (see the sketch below). Budget and mission goals can then be balanced more efficiently, as performance can be flexed based on the likelihood of data use. Cloud can also be incorporated into this approach.
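As a minimal sketch of what such a policy might look like, the snippet below assigns footage to a tier based on its age. The tier names and age thresholds are illustrative assumptions, not a vendor implementation.

```python
# Minimal sketch of an age-based tiering policy for surveillance footage.
# Tier names and thresholds are illustrative, not a vendor API.
from datetime import datetime, timedelta
from typing import Optional

TIERS = [
    (timedelta(days=7),  "disk"),  # recent footage: fast recall
    (timedelta(days=90), "tape"),  # older footage: near-immediate recall
]
DEFAULT_TIER = "tape-archive"      # oldest data: cheapest retention

def tier_for(recorded_at: datetime, now: Optional[datetime] = None) -> str:
    """Pick a storage tier from the age of a clip."""
    age = (now or datetime.utcnow()) - recorded_at
    for threshold, tier in TIERS:
        if age <= threshold:
            return tier
    return DEFAULT_TIER

print(tier_for(datetime.utcnow() - timedelta(days=2)))    # disk
print(tier_for(datetime.utcnow() - timedelta(days=200)))  # tape-archive
```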

In many scenarios the speed of data recall offered by a tape library is not sufficient, so it makes sense to forgo tape. Yet for users able to trade immediate for near-immediate access to their oldest data, tape can deliver high-capacity, long-term retention with no compromise on the quality at which video surveillance footage is stored or the length of time the data can be kept.

In a multi-tiered storage system, disk-based storage is still used as the high-performance primary tier. It is often designed to account for 25-30% of the initial total capacity; but once configured, further disk storage may be required only if additional bandwidth or camera streams are added. The remainder of the storage capacity can be built out in different storage tiers, such as on file-based tape.

In those scenarios where the only parameter that requires increasing is the data retention time, building out further capacity on tape can become even more economical.

For more info:

Josh Woodhouse is Senior Analyst, Video Surveillance, within the IHS Technology Group at IHS Markit
Posted 20 June 2017

In buying Whole Foods, Amazon spreads its might and furthers its threat


Amazon's $13.7 billion acquisition of Whole Foods is by far its biggest deal to date, an outsize gesture that manages all at once to achieve the following:

  • Highlight both the breadth and focus of its strategic ambitions
  • Underscore the electronic titan's need for scale to drive its grocery business
  • Show the limitations of online-only retail for certain product categories

Unveiled in an announcement on June 16, Amazon's shift from online to offline via Whole Foods is not unique. Amazon's Chinese counterpart Alibaba pursued a similar strategy with its $629 million deal in 2014 for a 35% stake in China's Intime department store chain. Last year, Alibaba spent a further $2.6 billion to complete its buyout. Similarly, Amazon has been stepping up its own physical retail presence with the rollout of dedicated brick-and-mortar bookstores, as well as testing a cashier-free grocery store called Amazon Go, open for now only to the company's employees.

Catching the fish that slipped through the net

The need for scale is central to Amazon's strategy. In its key segments, Amazon needs to extend its reach as far as possible, and the Seattle-based giant also wants to capture those parts of a customer's shopping basket that it does not currently dominate. The Whole Foods deal not only adds more selections to Amazon's current offerings, it also allows the e-commerce titan to provide a more complete buyer experience, from online to offline and vice versa. Amazon's ultimate goal in this acquisition is to capture the customers and products that have, somehow, slipped through the net.

To be sure, Amazon has attempted to make inroads into the grocery business for some time now, with mixed success. Its Amazon Fresh food delivery service has not enjoyed the same triumph as the company's traditional online retail and vastly popular Prime services. The Whole Foods deal shows that the grocery business is different: the fast-moving, perishable nature of the products demands a different approach, one better run through a combination of local stores and online ordering than through the large centralised warehouses relied upon by Amazon's other commercial operations.

Whole Foods could be just the first offline vertical that Amazon ventures into; footwear, clothing, and electronic devices could all be good candidates in the future. By having both physical and online stores, Amazon lets customers who prefer to see a physical product before purchase try it out first and then buy it online afterward.

The Amazon Go initiative is one such example. Its trial cashier- and checkout-free store uses "Just Walk Out" technology to track items that have been lifted from and returned to the shelves, monitor the resulting virtual cart, and charge customers accordingly. Like similar efforts in the past, Amazon Go also highlights the firm's behemoth retail ambitions, and its potential to drive disruption further.
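A rough sketch of the bookkeeping such a store implies appears below. The class, events, and prices are invented for illustration; this is not a description of Amazon's actual system.

```python
# Hedged sketch of virtual-cart bookkeeping: items lifted from a shelf
# enter the cart, items returned leave it, and the shopper is charged
# for what remains on exit. Names and prices are illustrative.
from collections import Counter
from typing import Dict

class VirtualCart:
    def __init__(self, prices: Dict[str, float]):
        self.prices = prices
        self.items: Counter = Counter()

    def lifted(self, sku: str) -> None:
        self.items[sku] += 1        # sensors saw the item leave the shelf

    def returned(self, sku: str) -> None:
        if self.items[sku] > 0:
            self.items[sku] -= 1    # item put back: drop it from the cart

    def charge_on_exit(self) -> float:
        return sum(self.prices[sku] * n for sku, n in self.items.items())

cart = VirtualCart({"milk": 1.99, "bread": 2.49})
cart.lifted("milk"); cart.lifted("bread"); cart.returned("bread")
print(f"charged: ${cart.charge_on_exit():.2f}")  # charged: $1.99
```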

Adding more to the customer's shopping basket will also give Amazon more data

If Amazon can tie its existing customer accounts to the people who purchase at Whole Foods, Amazon will be able to support all aspects of its business by utilising the data to improve sales performance and advertising across its website or apps. "Just Walk Out" technology is a potential longer-term extension of this strategy, but the Whole Foods deal will provide Amazon with more immediate short-term gains.

Amazon's competitive threat continues to spread

Amazon is often viewed as a competitor alongside other leading technology players like Apple, Facebook, Microsoft, and Google. While it provides services in many of the same categories as those tech giants, Amazon's competitive threat spreads much further, which should stoke fear, if not admiration and awe, in many a rival's heart.

For more information on this and related topics, visit our Media & Advertising research service.

Qingzhen (Jessie) Chen is Senior Analyst, Advertising Research, within the IHS Technology Group at IHS Markit

Also contributing to this piece is Jack Kent, Director for Operators & Mobile Media, likewise with the IHS Technology Group at IHS Markit

Posted 23 June 2017

The path to mass adoption for the smart home: incremental changes


Although the smart home has become the latest battleground for juggernauts such as Google, Amazon, and Apple, it is still early days for a young market being pulled in all directions. Given the nascent market landscape, and despite the mind-boggling fragmentation of industry players, the road map for the smart home is becoming clear, with solid growth to continue and a distinct emphasis on machine learning as the way forward.

Global revenue for the smart home market is forecast to reach approximately $18.0 billion in 2018, nearly double the $9.8 billion reached two years earlier in 2016.

However, for this growth to be realized, industry players including manufacturers, installers, and service providers will need to make incremental modifications to strategies surrounding pricing models, ease of use, privacy and security, and machine learning.

Already, pricing models have begun to change at the device level, with a wide spectrum of options now available, especially around smart lighting. Philips Hue had been the de facto smart light bulb in the past, but it is now part of an increasingly crowded market segment populated by companies including Lifx, Cree, C by GE, and Ikea. And notwithstanding pressure from dozens of other players, premium brands are unlikely to change their pricing models: a company such as Philips has an entire ecosystem of lighting devices to support, without a recurring-monthly-revenue (RMR) model to fund development, hence the higher device price point.

For professional services, providers have made concessions permitting do-it-yourself (DIY) installation alongside professional monitoring, a significant change from past business models, especially for mainstays like ADT, which now has Canopy (never mind that Canopy has yet to materialize, even though it was first announced in 2016).

Meanwhile, other professional service providers offer options for financing hardware. This allows a consumer to eventually own the hardware, effectively lowering the monthly bill after the equipment is paid off, another big change for professional security monitoring.

Overall, pricing plays a significant role in the path to mass adoption, because most smart home devices are priced above what many would consider an impulse buy. IHS Markit expects that until pricing reaches this impulse-buy threshold, most large-quantity purchases will be reserved for home construction and renovation projects or made through professional service providers like Utah-based Vivint, Dallas-based Moni, or the nationwide provider Comcast, which reduces the market opportunity available to device manufacturers through the retail channel.

To be sure, the smart home today is more about point solutions that solve a specific problem, not about installing dozens of devices simultaneously. To get to the mass market, smart home solutions need to be simple to set up, and there must be more clarity about which devices are interoperable. This is where professional service providers excel in delivering a positive smart home experience, while the retail channel has enjoyed less success in comparison. For the smart home market to reach mass adoption, viable DIY options must exist for those unwilling to pay monthly fees or resort to professional services when all that is needed is interactive alarm monitoring. Although interoperability is the problem confronting DIY, professional service companies, such as Vivint, which is partnering with Best Buy, and MONI, which has a kiosk in a Dallas airport, are good examples of evolving business models in which experts meet with customers to familiarize them with the technology and to set reasonable expectations.

One important issue in the smart home is privacy protection and security, a growing expectation among consumers, especially as more companies look to profit from data obtained from IoT devices. Today these concerns relate mostly to video cameras and voice assistants, which is why many cameras now have physical shields that cover their lenses when not in use. Despite these concerns, Amazon recently launched Echo Look, likely to be used in closets, a bold move.

Privacy concerns will soon shift to routers, which will monitor devices on the network for abnormal data traffic. Monitoring software will be able to shut down devices deemed to be acting out of the ordinary, helping to prevent botnet attacks and malicious activity. Apple has taken a lot of criticism for being slow in the smart home space, but its focus on security is unique and will likely help it win share during the next 12 months, especially as the company launches its rumored smart speaker.
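A minimal sketch of such a router-side check, assuming a simple statistical baseline per device, appears below. The threshold, floor value, and traffic figures are illustrative assumptions.

```python
# Minimal sketch of router-side anomaly monitoring: flag a device whose
# outbound traffic deviates sharply from its usual baseline, then
# quarantine it. Threshold and data are illustrative assumptions.
from statistics import mean, stdev

def is_abnormal(history_mb: list, current_mb: float, k: float = 4.0) -> bool:
    """True if current outbound traffic sits k deviations above baseline."""
    baseline, spread = mean(history_mb), stdev(history_mb)
    # Floor the spread so near-constant baselines don't trigger on noise.
    return current_mb > baseline + k * max(spread, 1.0)

# A smart plug that normally sends ~2 MB/hour suddenly sends 500 MB/hour.
usual = [1.8, 2.1, 2.0, 1.9, 2.2, 2.0]
if is_abnormal(usual, 500.0):
    print("quarantine device: possible botnet activity")
```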

Lastly, machine learning is becoming increasingly important for mass adoption of the smart home, because it combines various benefits: ease of use, killer use cases and, most importantly, making automation dead simple. Many of the leading service providers already use basic machine learning to automatically set scenarios and complete tasks, without the user having to create a scenario manually or even know how to create one. Smart speakers are even starting to implement alert-based notifications, building on what the ecosystem knows about the user in order to drive an action on the user's behalf. The new Nest camera, for instance, boasts the ability to track objects across a field of view and to distinguish friend from foe. Although this is essentially the same as video analytics, which has been around for decades, applying these features saves time and gives consumers greater latitude to do the things they love, rather than endure watching hours of video clips waiting for something out of the ordinary to happen.

Clearly, the path to mass adoption of the smart home is to make the "smart" in the home simple and non-disruptive to normal routine. This will include an effective pricing structure, as well as streamlined machine learning for the seamless automation of many household tasks. Furthermore, combining machine learning with in-home displays or voice assistants will make engaging the smart home more enjoyable for family and friends, rather than resorting to a cold, purely mechanical approach that relies on individual apps sitting behind passcodes.

For more information on this and related topics, visit our Smart Home Intelligence Service.

Blake Kozak is Principal Analyst, Security Technology, within the IHS Technology Group at IHS Markit
Posted 28 June 2017

Asking why we innovate in healthcare will tell us where things are going


As the smooth functioning of our lives depends so heavily on the complex calculus of our physical and mental well-being, there is an understandably strong interest in fathoming what technology could do for healthcare.

To be sure, the perennial urge exists within the healthcare industry to introduce a product or service that will have a profound impact on both clinicians and consumers. For clinicians, new regimens can help improve diagnosis, enhance workflows for greater efficiency, and provide more opportunities for appropriate monitoring. For consumers, the introduction into the market of new healthcare solutions translates into medicine that can be personalized to match individual needs and conditions.

But this benevolent impulse also comes down to gaining a competitive advantage: whoever solves healthcare’s problems meaningfully will surely reside in a leading market position for many generations to come.

The problem is the framework in which companies innovate. In healthcare, the innovation process is driven by incentives—an inorganic method of market advancement. What is likely to have commercial success is whatever payers—not consumers, but those with a financial interest or stake—incentivize through payments.

Who determines these incentives in healthcare, and on what basis?

In the United States, a committee under the American Medical Association known as the RUC (short for Specialty Society Relative Value Scale Update Committee) provides recommendations to the Centers for Medicare & Medicaid Services (CMS) on how to set payments. For its part, CMS, an agency within the US Department of Health and Human Services, has followed 94% of these recommendations since 1991. The RUC comprises 31 physicians, nearly all of them specialists. Four of the seats rotate on a two-year basis, with one set aside for primary care, a field decidedly underserved by this arrangement, inasmuch as primary care makes up the bedrock of US healthcare services. In simpler terms, 31 physicians get to decide how healthcare operates in the United States, not just within the public domain but also in the private sector, given that the vast majority of privately run insurers tie their payment structures to CMS.

So, the next time you spark a conversation about the future of healthcare, at least in the United States, it helps to understand the little-known but immense role of the RUC. The outsize influence of specialized physicians undoubtedly is one factor explaining why the US healthcare sector is having a difficult time with meaningful change. Without the RUC's endorsement, whether direct or indirect, a new health-related technology or innovation, no matter how exciting or valuable, will fail to diffuse beyond the innovators and early adopters.

What implication does this have for innovation?

It is necessary to distinguish between two seemingly similar, but completely divergent, questions. These are: 1) “How will we solve the challenges we face in healthcare?” and 2) “How will technology change the future of healthcare?” These two issues are often perceived as the same, given the emphatic effort to throw technology at every issue, but the realities are entirely different.

For the first question, it is the inability of consumers, including those without sufficient financial means or viable options, to make informed health decisions that figures strongly in why healthcare in America faces enormous challenges. In contrast, for the question of how technology can change healthcare, the answer depends on what is adopted; in this case, what providers are incentivized to adopt. Why we innovate becomes a question, then, of ensuring that our innovations fit within the context of payments.

In markets without institutional incentives, this is a matter of determining relative value and raising one very basic question: is a new technology worth adopting?

With institutional incentives, this decision is made for the provider by proxy. As a result, the majority of innovation in healthcare occurs within a restricted framework, a counterproductive model for innovation. Outliers to this prevailing system definitely exist, but beyond garnering a ton of media attention, those who don't play the game rarely achieve commercial success.

Why we innovate must link up with the fundamental issues of healthcare

Over the past few months, IHS Markit has published a series of ebooks examining the fundamental issues facing healthcare, focusing on clinical care, remote healthcare, and consumer health. These ebooks address how technology can enable both clinicians and consumers to become better at managing health in the future.

Download the ebooks:

IHS Markit healthcare ebooks

The ebooks tell a story of how creative human resources and innovative technical capabilities can come together and be allocated to support meaningful healthcare engagement, the kind that ensures long-term sustainability for all stakeholders, including patients, healthcare providers, and payers. Such a vision includes a serious prevention strategy and involves often ignored, but critical, aspects of population health, such as nutrition, physical activity, mental health, and more. It is already widely recognized that the biggest health problems are lifestyle related and preventable. Yet that is exactly the arena in which healthcare providers are least competent. Why? The simple but profound answer: because no substantial incentives exist to make room for the practice of meaningful preventive care.

A key factor in the whole equation is who will pay for more meaningful healthcare engagement. Under the reimbursement and other payment structures in place today, no one will, because no framework exists at present for such an actor. If no recalibration of incentives occurs, the industry will continue to market solutions that do not, at their core, resolve the fundamental issues at the heart of the problem.

In the long term, failing to adjust incentives would be detrimental: it would mean ignoring the current health crisis and would ultimately create inequality in healthcare, an utter contrast to the essential idea of reimbursement. And in an anomalous development, business models are emerging that are completely separate from the healthcare sector.

For instance, the primary care provider Forward offers personal health and automation services at a flat rate of $149 per month to its users. As one can imagine, not everyone can afford $149 per month. But if healthcare were delivered appropriately, companies like Forward—which takes pride in not having to deal with the “healthcare sector”—would never even exist.

Thirty years from now, if healthcare practice continues with no changes to the status quo, more practices similar to Forward’s will emerge, further entrenching already stark inequalities. Given the demographic changes and disequilibrium occurring in healthcare, standardization won’t come around or will be difficult to achieve. In other words, there will be a significant difference between the most resourceful clinicians and the least resourceful clinicians, even in high-income nations.

A few examples of technology that could support the standardization of healthcare include robotics, implantable devices, ingestible technologies, 3-D printing, genomics, and drones. However, these methods must be incentivized appropriately in order to support market adoption. This does not imply abandoning the conventional practice of medicine. Nor does it advocate shutting down the RUC and replacing it with Silicon Valley entrepreneurs.

Still, the RUC—or whatever body it is whose sole purpose is to set financial incentives in healthcare—should include other stakeholders of population health, not just physicians, in examining the thorny question of incentives. Asking why we innovate may tell us what direction we are heading. But in healthcare, how we innovate matters more and makes all the difference.

Roeen Roashan is Senior Analyst for Digital Health within the IHS Technology Group at IHS Markit
Posted 11 July 2017

China AMOLED shipments plunge, but valiant efforts and focus continue


Despite an aggressive push from China to expand AMOLED display manufacturing capacity to produce superior screens for devices like smartphones, Chinese makers are shipping far less product than expected.

AMOLED, or active-matrix organic light-emitting diode, is recognized as a strong alternative for achieving what rival LCD technology cannot, especially in smartphone displays. Here AMOLED offers lower power consumption, better color saturation, a slimmer structure, and most importantly, flexible and even foldable capabilities.

To date, Samsung Display has been the dominant supplier of AMOLED smartphone display panels. But Chinese manufacturers are keen to get in on the action and have been ramping up manufacturing capacity in assertive fashion. Among these makers are Beijing-based BOE; EverDisplay from Shanghai; Visionox Display in Shanghai-adjacent Kunshan; Tianma, Royole, and Chinastar, all three located in the major industrial enclave of Shenzhen in southern China; and Truly Display, northeast of Shenzhen.

Recent developments had indicated progress by local Chinese makers on their capital investments in AMOLED. For instance, Tianma announced on April 20 its first rigid and flexible AMOLED products from its Gen 6 AMOLED fab in the city of Wuhan in central China. Then, on May 11, BOE announced its first flexible AMOLED products from its Gen 6 AMOLED fab in Chengdu, 700 miles west of Wuhan.

Yet shipments from Chinese AMOLED makers so far have fallen short of their manufacturing capacity, as reported by the IHS Markit Smartphone Display Intelligence Service. This indicates that there's still a long way to go before the Chinese achieve stable yield rates for the AMOLED displays they make. Moreover, a significant gap remains between Chinese panel makers and Samsung Display. For the Chinese, surviving in the competitive market remains a top priority.

Shipments dropped quickly in Q1 2017

Chinese panel makers in the first quarter of this year, traditionally the low season, shipped a total of 1.3 million AMOLED displays, down 58.4% from 3.0 million in Q4 2016, when average selling prices had been high. To be sure, brands had released orders in advance during the final quarter of 2016 to control bill-of-materials (BOM) costs and to maintain abundant supply ahead of the surge in sales normally seen around the Lunar New Year in January. As a result, brands were cautious about releasing more orders in Q1 after the lunar holidays, ostensibly to control component inventory levels during Q1's low season. Aggravating the situation was weak demand for smartphones at the time, given that the number of new smartphone models introduced in the first quarter has historically never been high.

Even so, the country's AMOLED display shipments dropped faster than shipments for the overall Chinese display space. Unlike market leader Samsung Display, Chinese panel makers remain weak in AMOLED production yield rates and in their supply chains. And when global demand turned soft, as it does every first quarter, Chinese shipments were impacted accordingly. Still, local panel makers attained significant growth in mass-production loading rates, a whopping 403% increase from the year-ago quarter, though purely on the strength of comparison with the immature market of first-quarter 2016.

Smartphones are the major users of AMOLED

The global smartphone market accounts for most of the demand enjoyed by China's AMOLED panel makers. In the first quarter of this year, shipments from China of AMOLED displays for smartphones amounted to 900,000 units. That represented a plunge of 61% from the previous quarter.

To make up for the weak global demand for smartphones at that time, China increased its share of AMOLED shipments going to local smartwatch brands. As a result, China's share of the global smartwatch market rose to 29.2% in Q1, up from 23.8% in Q4 2016.

AMOLED shipment results by panel maker

Among Chinese AMOLED makers, EverDisplay remained the leader with a commanding 66.8% market share in Q1. In absolute numbers, however, the manufacturer's AMOLED shipments fell 58.4% from the previous quarter, down to 800,000 units. At the end of 2016, EverDisplay had won orders for 5-inch high-definition displays from Chinese telecom giant Huawei, and it started mass production in January 2017. But because Huawei's products carried much more demanding requirements, EverDisplay found itself facing quality-related issues that significantly impacted its overall output.

At the same time that EverDisplay started having issues, Samsung Display won back a portion of Huawei's demand for the same 5-inch HD AMOLED displays. With the South Korean electronics titan offering more stable quality and comparatively lower panel costs for its AMOLED products, EverDisplay, along with other Chinese AMOLED panel makers, was negatively impacted.

At No. 2 was Visionox, with its market share rising to 24.1%. Like EverDisplay, Visionox is a Chinese panel maker that relies on AMOLED. When demand from ZTE, another of China's telecom giants and a Visionox customer, nearly came to an end in Q3 2016, Visionox faced trouble. To maintain a reasonable level of capacity utilization, it continued to foster shipments to local outfits like Chinese smartphone brand Nubia.

For both BOE and Tianma, strong relationships with local smartphone brands allowed the two manufacturers to enjoy greater LCD exposure than their rivals. BOE and Tianma ranked No. 3 and No. 4, respectively, but their AMOLED shipments were far smaller, and less important to the makers, than their LCD output. For financial stability, the companies have to focus first on LCD for smartphones while they work through the more time-consuming and challenging process of achieving stable AMOLED yield rates.

At any rate, BOE's AMOLED fab in Ordos, Inner Mongolia, was only a Gen 5.5 rigid AMOLED facility producing 4,000 units per month, and Ordos was no longer the focus of BOE's strategy. Instead, BOE assigned top priority to the Chengdu Gen 6 flexible AMOLED fab, where it was already developing panel samples featuring the newly favored 18:9 aspect ratio. Shipments of AMOLED panels from BOE Chengdu are expected to increase quickly as Gen 6 capacity ramps up at the end of 2017.

A similar situation surrounded Tianma. Its Gen 5.5 AMOLED fab in Shanghai focused on the evaporation and encapsulation process, while LTPS array glass shipped from the maker's Gen 5.5 LTPS LCD fab in Xiamen. Because Tianma's LTPS LCD demand was strong and its AMOLED demand comparatively poor, Tianma gave LTPS LCD top priority in Q1, shipping fewer AMOLED panels in the process. Nonetheless, the manufacturer's AMOLED shipments are expected to increase quickly when its Wuhan Gen 6 AMOLED fab becomes ready for mass production.

Can Chinese AMOLED makers succeed?

There were multiple reasons for the poor first-quarter performance of Chinese AMOLED panel makers, including the period being the traditional slow season and continuing competition from Samsung Display.

However, the biggest reason was immature yield rates and insufficient product stability. AMOLED is, famously, difficult to produce, and good engineering know-how along with savvy technical management are the keys to success, counting for more than the mere possession of good equipment.

As can be seen from the strategies of BOE and Tianma, Chinese AMOLED makers are rapidly expanding manufacturing capacity, with more focus directed toward flexible AMOLED, skipping rigid AMOLED altogether. But while Chinese panel makers have successfully penetrated the global LCD market, becoming the largest LCD suppliers and capacity owners in the world, whether they can repeat this success in AMOLED remains a question at this point. And since achieving stable AMOLED yields and reliability takes considerable time, a clear path to return on investment (ROI) is also still up in the air.

Just the same, China's AMOLED makers appear undaunted, with bold plans to forge ahead and build more than 10 flexible AMOLED fabs in the country in the days ahead.

David Hsieh is Research & Analysis Director within the IHS Technology Group at IHS Markit
Posted 18 July 2017


Grid defection is attractive consumer alternative to reduce energy costs


On the back of rapidly decreasing costs for energy storage and solar photovoltaics (PV), consumers wishing to achieve a low-cost and reliable supply of power are considering grid defection, or at least partial grid defection, as an increasingly attractive alternative. But while lower energy costs are certainly welcome news for end customers, the same cannot be said for incumbent utilities, many of which will face new challenges that call into question their long-standing business model of selling and distributing electricity.

To be sure, grid defection is a term used largely in North America. Yet it is in Europe where the residential solar and energy storage markets have enjoyed greater penetration over the past few years. For instance, Germany alone already has more than 60,000 residential battery storage systems installed today, compared to fewer than 10,000 grid-connected residential energy storage systems in the United States.

This article, then, will focus specifically on the economics of grid defection in Europe today and in the future, and will also examine how such a development could impact the energy industry.

Grid defection: a competitive tool for customers

In Europe the economics of going off-grid are improving, but practical considerations make grid defection unlikely to be a major disruptor to existing, entrenched systems of consuming power. Even so, partial grid defection, especially a scenario in which PV is coupled with a battery energy storage system (BESS), is seen by customers as an increasingly attractive option to hedge against future rises in retail power prices.

As part of the latest report published by the IHS Markit Energy Storage Intelligence Service, we carried out in-depth modelling using hourly demand profiles to compare the levelized cost of energy (LCOE) under four scenarios: future retail rates without any onsite generation; a solar PV system alone; a combination of solar PV and battery energy storage to maximise self-consumption; and a full grid-defection scenario equivalent to 100% self-consumption. The term LCOE, which compares the relative cost of energy produced by different energy-generating sources, is often touted as the new metric in evaluating the cost performance of PV systems.
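For readers unfamiliar with the metric, the sketch below shows the standard LCOE calculation: lifetime costs divided by lifetime energy production, both discounted. The system cost, output, lifetime, and discount rate are illustrative assumptions, not the inputs of the IHS Markit model.

```python
# Sketch of the standard LCOE calculation: discounted lifetime costs
# divided by discounted lifetime energy. All inputs are illustrative.

def lcoe(capex: float, opex_per_year: float, kwh_per_year: float,
         years: int, rate: float) -> float:
    disc_costs = capex + sum(opex_per_year / (1 + rate) ** t
                             for t in range(1, years + 1))
    disc_energy = sum(kwh_per_year / (1 + rate) ** t
                      for t in range(1, years + 1))
    return disc_costs / disc_energy  # currency units per kWh

# Assumed 4 kW residential PV system: EUR 6,000 installed, EUR 80/year
# O&M, ~3,800 kWh/year output, 20-year life, 4% discount rate.
print(f"LCOE: {lcoe(6000, 80, 3800, 20, 0.04):.3f} EUR/kWh")  # ~0.137
```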

The modelling showed that:

  • Partial grid defection using solar PV alone is already competitive with retail rates in the four countries analysed, namely Germany, Italy, the Netherlands, and the United Kingdom.
  • When analysing the LCOE offered by a combined PV and battery storage solution, IHS Markit finds that this case has already reached parity with retail rates in Germany and Italy, and will do so in the United Kingdom by 2023.
  • Complete disconnection from the grid will not become economically attractive until at least 2025, IHS Markit expects.
  • For battery storage, the sizing sweet spot that is also the most economically attractive lies between 5 and 8 kilowatt-hours of energy capacity for the BESS.

Regardless of the economics involved, we do not expect a scenario in which customers fully disconnect from the grid or achieve 100% self-sufficiency to become reality in Europe. This is due both to the technical impracticalities stopping such a scenario from taking root, and to the fact that a widely stable electricity grid is already in place, making complete grid defection unnecessary.

Nonetheless, IHS Markit expects that more than a million residential battery energy storage systems will be installed in Europe by 2025, driven primarily by customers looking to take control of their energy supply and to hedge against future price uncertainty.

New business models offer virtual solution toward 100% independence

As customers look to minimize reliance on the grid as much as possible, they need either active demand-side management or, alternatively, to consider new business models. A number of players, including the likes of Sonnen, E3DC, and Beegy, have started to act as electricity suppliers, offering customers the capability to achieve so-called virtual independence through the mechanisms described below.

  • In the most common scenario, customers feed excess solar power into the grid; the solution provider meters the electricity and credits it to the customer's account. During times when the customer is unable to generate enough solar power, they receive this electricity back from the supplier (see the sketch after this list).
  • Alternatively, some suppliers are offering flat-rate electricity tariffs. Here the customer-with solar and storage installed-pays a fixed fee, which covers all electricity demand that is not self-produced (regardless of consumption levels).
  • Other players offer Virtual Power Plant solutions, which provide additional revenue to customers by aggregating distributed storage systems to participate in ancillary services markets.
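A minimal sketch of the kWh-credit accounting behind the first mechanism appears below. The rules are a simplification for illustration, not the actual terms offered by Sonnen, E3DC, or Beegy.

```python
# Sketch of a kWh-credit account for "virtual independence": exported
# solar power is banked with the supplier and drawn back later; only
# uncovered demand is billed. Rules are a simplification.

class VirtualAccount:
    def __init__(self) -> None:
        self.banked_kwh = 0.0  # credit from exported solar
        self.billed_kwh = 0.0  # shortfall billed at the normal tariff

    def export(self, kwh: float) -> None:
        self.banked_kwh += kwh  # excess PV fed into the grid

    def draw(self, kwh: float) -> None:
        from_bank = min(kwh, self.banked_kwh)
        self.banked_kwh -= from_bank
        self.billed_kwh += kwh - from_bank

acct = VirtualAccount()
acct.export(6.0)  # sunny afternoon: 6 kWh banked
acct.draw(4.0)    # evening demand, fully covered by the bank
acct.draw(5.0)    # next morning: 2 kWh from the bank, 3 kWh billed
print(acct.banked_kwh, acct.billed_kwh)  # 0.0 3.0
```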

The challenges of decentralised generation and self-consumption

Irrespective of the extent to which customers defect from the grid, or whether they choose new virtual self-consumption models, the shift toward the so-called energy prosumer fundamentally challenges the incumbents, electricity suppliers and network operators alike. In fact, the share of self-consumed power in total electricity demand is expected to double by 2020.

Still, IHS Markit predicts that utilities will play a major role in the growing energy storage market. Seven of the ten biggest electricity suppliers in Europe, including E.ON and EDF, are already commercialising propositions related to behind-the-meter energy storage.

Personally, I believe the trend toward grid defection in pursuit of greater energy independence won't stop. We see strong economic and emotional drivers supporting this shift. At the same time, utilities are clearly starting to understand that the energy landscape is shifting around them, and that they will be able to play a crucial role, not only in commercialising new business models but also in retaining customers through energy storage.

For more information on this and related topics, visit our Energy Storage Intelligence Service.

Julian Jansen is Senior Analyst, Solar, within the IHS Technology Group at IHS Markit
Posted 24 July 2017

TV content and cord-cutting: conflict on the wire


On TV, content is crucial to retaining customers. Without fresh and satisfying high-quality shows, customers will inevitably look to other platforms for their entertainment needs. Traditionally, TV content has adhered to a fairly rigid structure for monetization via carriage fees and broadcaster advertising. Movies also benefit from the pay-TV ecosystem, but often an initial box-office run is enough to ensure the individual profitability of films.

Many US pay-TV video subscribers are chafing at the weight of monthly package prices, with average revenue per user (ARPU) hitting a staggering $92.30 in 2016, IHS Markit estimates. At the end of that year, 1.3 million consumers ended their subscriptions to monthly cable services provided by US operators like Verizon, Comcast, Time Warner, and AT&T. These subscribers who "cut the cord," so to speak, are known in the industry as cord-cutters.

In comparison, there were 1.8 million cord-cutters in 2015, when ARPU was lower, at $89.34.

And cord-cutting is going to continue, given the steady increase in ARPU that can be expected. In turn, this will result in less money available for content creation, unless ARPU increases can drive revenue up faster than the shrinking subscriber base drives it down.

IHS Markit graphic of US TV households, pay-TV subscribers, and penetration

To a certain extent, increases in pay-TV ARPU will make up for revenue lost to cord-cutting, and this will remain the norm through 2021. Even so, it seems inevitable that ARPU growth alone won't be able to keep pay-TV video revenue growing indefinitely.

Some may argue that existing content businesses could have another revenue stream, in the form of over-the-top (OTT) video offerings such as Netflix. But can these successor services serve as a direct replacement for the existing pay-TV-video businesses?

The answer is no, as OTT offerings simply don't generate enough revenue to sustain the content-creation ecosystem. In 2016 Netflix reported worldwide streaming ARPU of $8.61 per month and revenue of $8.8 billion, far less than what would be required to support the diversity of content currently found on pay-TV video. In comparison, Comcast's monthly ARPU the same year was $83.19, with the company spending $11.6 billion on programming to bring in $22.4 billion in video revenue.
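The per-subscriber arithmetic behind that comparison, using only the figures quoted above, is simple enough to lay out (a back-of-the-envelope check, not a model):

```python
# Back-of-the-envelope check using the ARPU figures quoted in this post.
netflix_arpu = 8.61   # USD per month, worldwide streaming, 2016
comcast_arpu = 83.19  # USD per month, Comcast video, 2016

print(f"Netflix: ~${netflix_arpu * 12:.0f} per subscriber per year")
print(f"Comcast: ~${comcast_arpu * 12:.0f} per subscriber per year")
print(f"ratio:   ~{comcast_arpu / netflix_arpu:.1f}x")
```

At roughly ten times the revenue per subscriber, the traditional bundle funds a breadth of programming that OTT pricing, as it stands, cannot.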

Is the channel business under threat of imminent collapse?

Again, the answer is no. However, what's likely to occur over the next decade is a reduction in the number of channels overall. Today, we're seeing some evidence of what is just the tip of the iceberg, and IHS Markit expects that there will be major changes in the channels being offered by the biggest US channel groups in the next decade.

In February 2017, following the release of its 2016 earnings, Viacom announced it would refocus its efforts on six key brands: MTV, BET, Nickelodeon, Nick Jr., Comedy Central, and Spike. The announcement doesn't necessarily mean the closure of channels like TV Land, BET Jams, Centric, or CMT, but it does mean that the smaller and ailing channels won't receive the same attention and budgets that they have enjoyed in years past.

Is this just a Viacom problem?

No. Like Viacom, the majority of the biggest US channel groups also have several underperforming cable networks in their portfolios, ripe for reductions in both programming and marketing spend. The incursion of Netflix and other OTT solutions will have an even more disastrous effect on the long-term viability of independent channels. IHS Markit believes that pay-TV operators will be forced to pay carriage fees to major channel groups before they are able to allocate any payments for independent channels.

So, will Netflix kill the content industry?

Netflix and other OTT pay-TV alternatives won't be the death of traditional pay-TV-video service in the United States. However, these OTT options are contributing to the general malaise of the traditional pay-TV-video space.

Still, as more video is being delivered through the open internet than ever before, creative solutions are also being developed that may allow content owners to neatly sidestep issues involving the reduced number of pay-TV-video subscribers. For instance, monetizing the use of data networks and bundling with third-party mobile and broadband packages have already proved effective in some markets.

Just the same, the prognosis isn't encouraging. While further solutions are likely to present themselves, the forecast loss of another 3.9 million US pay-TV-video subscribers between 2017 and 2021 points to a turbulent near future for the television industry.

Erik Brannon is Principal Research Analyst, Television Media, within the IHS Technology Group at IHS Markit
Posted 31 July 2017

Virtual reality in retail: the case for business drivers


In the abstruse technological arena of virtual reality (VR), there is no shortage of hype or outsize conjecturing. True, the format is now widely accepted by the games industry, where immersive experiences serve to enhance many applications. But how applicable is this success to other industries?

For instance, do home-shopping channels and the ecommerce sector need to be prepared to invest in VR technology? Will companies gain a competitive edge if they have a VR app?

To discuss these topics, I recently had the pleasure of moderating a panel during the Multi-Channel Money Streams (MCMS) Congress, held on the occasion of the Electronic Home Shopping Conference this past June in Venice, Italy. Joining me on the panel were three industry leaders: Richard Burrell, previously at QVC, the world's leading video and ecommerce retailer; Maurits Bruggink from EMOTA, the European ecommerce and Omni-Channel Trade Association; and Alex Kunawicz from Laduma, a virtual reality content company. The event offered a unique opportunity to hear from industry experts and thought leaders on topics expected to affect the future of the modern multi-channel industry.

The first panel in the conference focused on the subject of business drivers of virtual reality, as well as on the practical implications of VR for the retail industry.

Based on the findings of a 2016 report that we published on the subject, IHS Markit believes that VR will remain a niche technology, not only for now but for a good number of years moving forward. There are several reasons, but in particular, the added expense of specialized headsets, along with the need for users to possess a technical understanding of the format, means that VR is not yet consumer friendly or ready for the mass market.

There is also the issue of fragmentation and format discoverability, neither of which helps consumer adoption. For instance, while many notable applications are already available for virtual reality, most consumers are still not aware of them. And many current owners of VR headsets do not know where to find content, or even which platforms are available and compatible with their particular brand of headset.

To make VR more successful, then, investment is needed in creating content, standardising technologies, raising consumer awareness, and educating consumers, all serving to underscore the benefits of using VR as well as the availability of the technology.

And yet, an important question must be addressed: who should be leading the charge toward a VR future?

The big guns duke it out

The likes of Facebook, Google, and Samsung are heavily investing in VR, but which of them is, in fact, going to push things forward?

In the case of Facebook, the company has invested $2 billion in its Oculus headset offering, and such a large investment from a ubiquitous global content platform implies a desire to make the technology as popular and widespread as possible-far beyond today's niche user base. To expedite matters, Facebook will be adding live video streaming to its social VR product, called Spaces, which lets users operate avatars that hang out with the avatars of other users in a virtual world. From all appearances, Spaces is not only an innovative application that meshes well with the company's social media roots; it also marks Facebook's intent to create awareness and education around its initiative and to make VR more popular in general. The numbers tell a daunting story: while Facebook currently has a global user base of approximately 2 billion, less than 1% of the social media giant's base owns a VR headset.

Current opportunities: but who controls the VR headset market, anyway?

Smartphone VR makes up the vast majority of headsets, counting an installed base in 2016 of almost 16 million units, equivalent to 87% of the addressable VR market by device. By 2020, however, this share is forecast to erode to 53% as other platforms find traction. Even so, smartphone VR will remain the biggest addressable market for VR content.

Among VR players, Google's Daydream will become the dominant smartphone VR platform by 2019, reaching a projected 14 million units by then. IHS Markit believes that Google's next-generation VR platform, in circulation by that time, will eat into sales of Samsung's Gear VR headset, directly impacting the market fortunes of Samsung's offering.

Across the high-end VR headset market, Sony's PlayStation VR is expected during the early part of our forecast to outsell PC-based VR headsets, including Facebook's Oculus Rift and Taiwanese-based HTC's Vive. By 2019, the installed base of PC-based headsets will overtake the game-console-based PlayStation VR, as adoption of Sony's headset slows in tandem with the aging PS4 sales cycle. Overall, however, consumer adoption of VR across all platforms will be slow as buyers take their time in evaluating the technology.

Investing in VR: should you do it?

So, apart from Facebook's "Spaces," is VR applicable to just games and experiences? In fact, there have been some successful applications of VR in other markets-such as the retail industry. Some examples of VR use in retail include the following:

  • Customising of cars as well as bespoke automotive products (Audi)
  • Virtual walk-throughs and remote selling in real estate and hotel bookings
  • Interior design and decor (IKEA)
  • Ecommerce: Chinese ecommerce retail giant Alibaba has a VR retail app called Buy+ that lets consumers shop in Alibaba's virtual reality universe, enabling China's vast consuming public to explore big shopping centres across other parts of the world.

But while VR can be deployed, as shown above, in fresh and innovative ways for some products, this does not mean that the VR experience is universally applicable, especially as the marketing hype of the format or the novelty factor begins to wear thin.

When deciding on whether to invest in VR technology, companies will need to consider two important factors: the big investments required by VR, as well as the less-than-certain revenue prospects or return that can be expected from an investment in the new technology. For instance, is direct monetisation through the use of VR a realistic expectation to justify a company's investment in the technology?

For many companies contemplating a VR venture, the hope is to avoid repeating the mistakes made during the introduction of 3-D technology. Just because you can do it doesn't mean you should. In the television sector, many players that had invested in 3-D did so in the belief that the new and exciting technology was the way to go, and that not investing meant being left out or appearing old-fashioned and out of touch. Fast-forward to today, and 3-D live channels launched by companies and operators have mostly closed down. For their part, TV brands have all but abandoned the production of 3-D television sets.

The latest analysis from IHS Markit shows that VR is already performing better than 3-D TV ever did.

Those of us on the panel at the MCMS Congress came to the following conclusion after a very lively session: when it comes to investing in VR for retail and ecommerce, it is still too early to get a clear sense of what to expect in terms of return on investment, and how much companies should invest remains uncertain. But as is the case with all new technologies, it is unquestionably important to remain aware of and up-to-date with developments taking shape in the VR space, allowing for a considered and intelligent assessment of what VR could bring to your business.

Maria Rua Aguete is Executive Director for Media, Service Providers & Platforms, within the IHS Technology Group at IHS Markit
Posted 14 August 2017

Global Display Conference to present far-reaching view of display industry

IHS Markit is hosting "Global Display Conference 2017," a two-day forum on important current and emerging issues affecting the worldwide display supply chain.

The event, to be held 19-20 September at the Hyatt Regency San Francisco Airport, brings together industry leaders, analysts, and subject-matter experts in various planned sessions throughout the conference to offer attendees valuable perspectives unique to the display landscape, including:

  • Analysis and insight on supply chain implications, and moves by Chinese brands to expand their presence worldwide
  • Challenges and solutions in touch technologies
  • OLED display technology evolution and trends for end-market applications, including TVs, smartphones, and computing
  • New technologies and their likelihood for market adoption
  • Automotive technology and its prospects for both near- and long-term

Day One sessions

Kicking off the conference on Day One is David Hsieh, director of research and analysis for displays at IHS Markit, in a keynote session that will include companies from the display component, panel-maker, and semiconductor industries. The keynote is expected to touch on a number of themes, such as major display market and technology trends, industry long-term growth, consumer preferences, and the continuing evolution of display technology. Joining Mr. Hsieh at the keynote session will be speakers from Intel, Corning, Canon, and BOE.

Following the keynote is the OLED & Flexible Displays session, to be led by Jerry Kang, principal research analyst for displays at IHS Markit. Joining Mr. Kang in discussing technology, innovation, and advancement in OLED and flexible displays will be speakers from Cynora, Universal Display Corp., and Kateeva.

The third session on Day One is devoted to examining the state of the global television market, which faces a quickly shifting media landscape and significant challenges in achieving continued growth and profits. Paul Gagnon, research director for consumer devices at IHS Markit, will be joined by speakers from Nanosys and LG Electronics.

Day One will end with a collaborative session on touch screens and interactivity, to be led by Calvin Hsieh, assistant director of research and analysis for displays. The session will explore the various approaches utilized by panel manufacturers to make displays interactive alongside their display-based user interfaces, including in-display fingerprint sensing. The companies in this session include FlatFrog, WaveTouch, Synaptics, and Qualcomm.

Day Two sessions

Day Two will feature a clutch of industry experts who closely monitor the entire global display spectrum, covering automotive displays, display manufacturing equipment and materials, digital signage, mobile PCs, and desktop displays.

Mark Boyadjis, principal analyst and manager of automotive user experience at IHS Markit, will lead the session on automotive displays. Mr. Boyadjis will speak on display trends-from novel technologies in display panels to innovative ways for car occupants to interact with the vehicle. The session will also cover the human-machine interface (HMI) as it applies to automotive, with discussions on technical innovations in HMI, as well as best practices in UX and interface design as the industry works to reinvent the in-vehicle user experience. The companies participating in this session are Delphi, Rightware, and Ultrahaptics.

The next session, on display manufacturing equipment, is timely, given that the display industry is currently undergoing its most significant changes in manufacturing and display technology since mass production of flat-panel displays began in the 1990s. Charles Annis, senior director of display manufacturing technology at IHS Markit, will examine how evolving display technologies are necessitating new and advanced manufacturing approaches. Joining Mr. Annis are three leading supply chain companies: Applied Materials, Plansee, and Mycronic.

In the sphere of public displays, the sector is poised to benefit on the back of lackluster consumer television sales, with revenue for public displays projected to enjoy a stellar compound annual growth rate of 18.2% from 2017 to 2021. Similarly, the market for fine-pixel-pitch LED video displays is forecast to scale new heights, with revenue estimated to reach $436 million this year, up a heady 82% from 2016. Sanju Khatri, director of digital signage and professional video at IHS Markit, will lead this session on digital signage. Also on hand will be three companies-Elo Touch Solutions, E Ink, and SiliconCore Technology, Inc.-to explore the various applications and technology trends enabling such growth.

For the final session of Day Two, Rhoda Alexander will chair the Mobile PC and Desktop Displays session, exploring how form and function are changing usage patterns in these sectors, and vice versa. The personal computer market is increasingly diverse, spanning tablets, notebook PCs, and desktop systems across home, office, classroom, and mobile use. Dolby Laboratories will discuss market shifts and look into both future opportunities and risks for display and system providers alike.

View the detailed agenda covering the entire two-day conference.

Registration particulars

Registration for the two-day event is now open. Early bird registration, offered at a reduced fee of $599, ends 1 September; regular registration is $799. A discount of $99 is available for groups of two or more; all attendees must be registered at the same time for the discount to apply. A 10% discount is also available to previous attendees of the Global Virtual Event, held 16 May.

For more information or for assistance on registration, contact us.

IHS Technology Group at IHS Markit
Posted 28 August 2017

Wi-Fi use in automobiles continues to build and grow

Over the past few years, the use of Wi-Fi technology in consumer automobiles has been steadily growing, with telematics units that offer broadband data leading the way in the wide adoption of in-vehicle Wi-Fi hotspot services, particularly in the United States. And in deploying Wi-Fi technology, infotainment system vendors are beginning to introduce wireless smartphone projection technology that no longer requires USB connection.

Given these developments, IHS Markit projects that the total number of Wi-Fi-enabled automotive devices-infotainment systems and telematics systems combined-shipped in light vehicles will grow from 13.8 million units in 2016 to 47.3 million in 2021, equivalent to a compound annual growth rate (CAGR) of 28%.
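As a quick sanity check, the quoted rate follows from the standard CAGR formula, (end/start)^(1/years) - 1. A minimal Python sketch of the arithmetic, using the figures cited above:

```python
# Sanity check of the quoted growth rate using the standard CAGR formula.
start_units = 13.8   # million Wi-Fi-enabled automotive devices shipped, 2016
end_units = 47.3     # million units, 2021 forecast
years = 2021 - 2016

cagr = (end_units / start_units) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # ~27.9%, consistent with the ~28% cited above
```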

Automotive Wi-Fi-enabled shipments by device type

In-vehicle Wi-Fi hotspot service gaining traction in North America

Although in-vehicle Wi-Fi hotspot was introduced in the United States as early as 2008, a more extensive implementation of broadband Wi-Fi hotspot service in automobiles did not occur until 2014. That was when GM OnStar introduced its 4G LTE Wi-Fi hotspot service to GMC, Chevrolet, Buick, and Cadillac vehicles sold in North America. Fast-forward to the end of the first quarter this year-almost three years after the initial launch-and more than 5 million 4G LTE Wi-Fi-equipped vehicles can be found on the road, OnStar reports.

Earlier this year in March, GM and Jaguar Land Rover started to offer in-vehicle 4G LTE hotspot service with unlimited data for $20 a month through AT&T. Other major vehicle manufacturers including Audi, Ford, and Volvo quickly followed suit, and the reduced pricing is expected to contribute to wider adoption of Wi-Fi connectivity inside automobiles in North America.

Wireless smartphone projection technology using Wi-Fi

Smartphone projection refers to integrating a smartphone with an infotainment system so that the phone's applications are projected as icons on the vehicle's touch screen. By using these icons, drivers can more safely access smartphone applications through the head unit while driving.

The two most popular projection applications currently in the market are Apple CarPlay, offered on the iPhone; and Google Android Auto, offered on Android-based smartphones. The majority of infotainment systems supporting Apple CarPlay and Android Auto currently require a USB cable connection. From late 2016, however, infotainment system vendors-first Harman, then Alpine Electronics-started to launch infotainment systems offering wireless smartphone projection technology.

Because drivers still need USB cables to charge smartphones inside automobiles, the benefits of wireless smartphone projection technology are currently limited. As a result, the technology is not expected to grow quickly in the near future. Even so, as more vehicles adopt wireless smartphone charging in the next few years, wireless smartphone projection technology is expected to gain broader market acceptance, since the combined technologies will allow drivers to eliminate the need to carry USB cables.

Adoption of V2V technology remains to be seen

Vehicle-to-vehicle (V2V) refers to an automobile technology designed to allow cars to communicate wirelessly with each other, so that vehicles in proximity-typically within 300 meters (about 985 feet)-can exchange information to warn of hazardous road conditions and to avoid collisions. In the United States, V2V technology is called WAVE, or Wireless Access in Vehicular Environments, which is defined by IEEE 1609, a set of higher-layer standards built on the low-level IEEE 802.11p radio standard.
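To make the idea concrete, here is a purely illustrative Python sketch of the kind of status broadcast a V2V-equipped vehicle might exchange with its neighbours. The class and field names are hypothetical and do not reflect the actual IEEE 1609 or SAE message formats.

```python
# Hypothetical illustration only: not the real WAVE/IEEE 1609 wire format.
from dataclasses import dataclass

@dataclass
class VehicleStatusMessage:
    vehicle_id: str      # hypothetical identifier
    latitude: float      # degrees
    longitude: float     # degrees
    speed_mps: float     # metres per second
    heading_deg: float   # compass heading, degrees

V2V_RANGE_M = 300.0  # typical exchange range cited above

def is_neighbour(distance_m: float) -> bool:
    """A peer vehicle is relevant if it sits inside the ~300 m V2V range."""
    return distance_m <= V2V_RANGE_M

msg = VehicleStatusMessage("veh-42", 42.331, -83.046, 17.9, 270.0)
print(is_neighbour(120.0), msg)
```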

In December 2016, the US Department of Transportation proposed a rule mandating V2V communication on light vehicles by 2019. After Donald Trump was sworn in as the new US president a month later, however, he signed an executive order requiring that two active regulations be eliminated before any new regulation could be issued. This executive order makes it unlikely that the proposed V2V mandate will be issued in the near future, leaving industrywide adoption of V2V technology in the United States uncertain at this time.

Wi-Fi is here to stay

At present, Wi-Fi hotspot service in automobiles continues to gain wider acceptance in the North American market, particularly in the United States. Meanwhile, wireless smartphone projection technology is expected to be the next promising Wi-Fi use case in automobiles a few years from now. For V2V WAVE technology, however, industrywide adoption remains to be seen.

Overall, however, it is clear that by adopting Wi-Fi technology in automobiles, vendors are able to offer high-performance applications, such as broadband internet and wireless smartphone projection, to drivers and passengers.

And as today's automobile continues its transition into becoming a home away from home for both drivers and passengers in the future, Wi-Fi's position as a key wireless standard in this transformation remains secure.

Christian Kim is Senior Analyst for IoT & Connectivity within the IHS Technology Group at IHS Markit
Posted 4 September 2017

World radiology & cardiology IT market remains highly concentrated

Just five companies in 2016 accounted for more than half of the total revenue of the global radiology and cardiology IT market as it continues to move toward integrated healthcare solutions, according to a new IHS Markit report on the subject. And while the replacement market in mature territories like the United States and Western Europe remains competitive, sociopolitical and economic prospects in various emerging markets are less promising and will inhibit growth.

The report, entitled "Radiology and Cardiology IT - 2017," offers a close look at the global radiology and cardiology IT market's various strengths, weaknesses, opportunities, and potential barriers. It evaluates trends sweeping the market, provides revenue forecasts through 2021, and supplies detailed information on the six largest sub-regional territories-the United States, the United Kingdom, Germany, France, China, and Japan-as well as on 49 sub-regional markets.

The small list of five companies dominating the worldwide radiology and cardiology IT market indicates that the market remains highly concentrated in the hands of a few. Together, Chicago-headquartered GE Healthcare, Belgian-German conglomerate AGFA Healthcare, Philips of the Netherlands, Nashville-based Change Healthcare (formerly McKesson), and Fujifilm of Japan held 57% of the market's total revenue of $2.8 billion.

Within the market, the largest segment is radiology picture archiving and communication systems (PACS). An electronic imaging technology that eliminates the need for physical materials when storing or transmitting medical data, radiology PACS accounted for a whopping 78% of total global radiology and cardiology IT revenue.

Among regions, the combined markets of North America and Western Europe represented more than three-fifths of worldwide revenue, but saturation is driving vendors to focus on differentiation in products and services. Meanwhile, a dearth of robust IT infrastructure and limited bandwidth are hampering growth in emerging markets in parts of Latin America, Africa, and Southeast Asia. The Asia-Pacific market, however, is projected to enjoy the fastest growth in both radiology IT and cardiology IT in the years to come.

The developments in the global radiology and cardiology IT space are part of a larger worldwide trend in which healthcare providers are adopting newer, more innovative, and integrated healthcare IT solutions. Providers are hoping to increase efficiency in operations while also attempting to address continually changing reimbursement models.

Moving forward, key market opportunities will lie with managed services that cater to user demand, as well as with scalable, lightweight solutions; lengthier and more comprehensive contracts; and integrated solutions that are interoperable with business analytics platforms. In the short term, as providers look to replace siloed solutions, IHS Markit predicts that the interoperability of a solution will have the biggest bearing on procurement decisions.

Nile Kazimer is an Analyst, Healthcare Technology, within the IHS Technology Group at IHS Markit
Posted 11 September 2017

Robotics, robots and technology: a simplified overview of a vast subject

In the past few years, robotics has gone through astonishingly rapid development thanks to the overlap of various factors, including affordable high-powered computing, compact and mobile components, big-data machine learning, and low-cost 3-D manufacturing.

These technologies have led to a new wave of innovative robot designs. And where previously they would have been cumbersome, ineffective or dangerous, robots are increasingly utilized today in consumer and professional applications alike.

The global market for industrial robots

At the fundamental level, the development of current robotics-defined as the underlying technologies associated with robots-has less to do with the physical actuation and operation of devices. Instead, robotics development has more in common with computerized control and the development of machine autonomy. Both of these variables, in turn, are closely associated with increased machine perception via sensors, as well as with logical decision-making based on the recognition of patterns that is a hallmark of machine learning.

The difference, then, between robots and many physical devices of similar form lies mainly in two aspects. The first is that unlike other comparable devices, robots have the ability to provide feedback to their human controller, using sensors to create haptic-i.e., touch-or remote input. The second is that robots are able to process sensory data in order to make decisions on how to execute tasks, and in more advanced cases, to define and elucidate the nature of the task to be performed.

Robotic intelligence

Intelligence in robotics is conceptually complex. Even the task of defining intelligence is a matter of much debate, encompassing other abstract concepts such as understanding, learning, reasoning, and meaning. A quick review of most dictionaries easily turns up circular definitions, such as understanding being defined as perceiving something, while perception is defined as the ability to understand something.

The difficulty in applying these abstract definitions to machine code is that we are fully aware of the processes involved in producing an outcome that would otherwise look like reasoning or understanding. It thus becomes very easy to understate or dismiss machine intelligence as distinct and distant from organic intelligence, simply because we already understand the details of the process: if a human codes for the behavior, the thinking goes, it cannot be truly intelligent.

This perception is being challenged by self-learning structures, such as the neural networks used for machine learning. In these cases the intelligence is not programmed but learnt, and the learning is then applied to other similar tasks, further inducing learning. This process of generating intelligence is analogous to the way organic intelligence develops through trial-and-error and repetition, ultimately leading to innovative decisions-that is, decisions that are not preprogrammed.

Machine learning and deep learning

Perhaps the crucial change in recent machine intelligence is the development of machine learning-more specifically deep learning.

Machine learning uses large volumes of data to recognise patterns based on similar experiences. This method often makes use of neural networks, where probabilistic outcomes are combined to then make assertions about what a particular piece of information represents. The power of machine learning is that it can be applied to any source data, which then opens up the possibility for the rapid acceleration of machine intelligence. The current stage for most artificial intelligence is pattern recognition from large volumes of text or visual data, such as the facial-recognition algorithms on Facebook, the contextual recommendations of Google Assistant, or the natural language processing of speech by Amazon's Alexa.

Recent machine learning systems use a process called deep learning, in which algorithms structure high-level abstractions in data by processing multiple layers of information-an approach loosely modeled on the workings of the human brain.

Progress today in deep learning has been possible thanks to advanced algorithms, alongside the development of new and much faster hardware systems based on multiple graphics processing unit (GPU) cores instead of traditional central processing units (CPU). These new architectures allow faster learning phases as well as more accurate results.
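To illustrate the "learnt, not programmed" point in miniature, the sketch below trains a tiny two-layer network-plain NumPy, no GPU-to reproduce the XOR pattern purely from examples. It is a toy illustration of the general technique described above, not any production system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Four input patterns and their XOR labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for one hidden layer and one output layer.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20_000):
    # Forward pass: each layer builds a higher-level abstraction of the input.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: apportion the error and nudge every weight toward it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]: learnt, not coded
```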

Artificial intelligence

Artificial intelligence is a broad and poorly defined ideal-the imaginary and moving boundary at which the stages of machine intelligence overreach our current expectations of their capabilities.

At the edge of currently accepted AI definitions are recent achievements in the field, such as the understanding of human speech, the ability to win at highly complex games like the ancient Asian game Go, the ability to recognise faces and objects, and the ability to analyse and spot patterns in huge volumes of data.

Yet each of these is premised on a method that is still relatively explainable. These achievements, while remarkable, also lack a higher order of intelligence demonstrable in abstract, nonmaterial qualities like creativity, understanding, or self-awareness. In all likelihood, it is only a matter of time before these triumphs are considered mere computer know-how and not really AI.

At that point, our expectations will then move even closer toward abstraction, which we currently find harder to define.

Tom Morrod is Research and Analysis Executive Director within the IHS Technology Group at IHS Markit
Posted 18 September 2017


The strategic role of memory in the iPhone

With the official release of the iPhone 8 and 8 Plus on Friday (September 22), all eyes are once again on Apple to see how well the new smartphone fares in the market among consumers and the retail channel alike.

But while it is common knowledge that Apple rakes in substantial revenue for the crown jewel of its portfolio, less well-known is the role played by NAND flash memory as a profit engine and key contributor to Apple's massive coffers, especially during the early iPhone years.

That starring role for memory may no longer be true today, as Apple has shifted its revenue model from a focus on hardware to one heavily reliant on services. Even so, the outsize impact of memory can be understood by analyzing the decade-long historical record behind Apple's total bill-of-materials (BOM) cost for the iPhone.

At the IHS Technology Group within IHS Markit, the Teardowns & Cost Benchmarking Services research has built a deep and extensive database containing information on every single component that goes into the iPhone-and what it costs to make each component-to come up with a total BOM cost for every iteration of the phone.

That database goes all the way back to 2007 upon the release of the first iPhone and has continued to this day, with Teardowns announcing its most recent findings on the iPhone 8 in an official IHS Markit | Technology news release.

Memory's place in the iPhone BOM

A study of the historical information in IHS iPhone teardowns reveals many intriguing details on memory as it relates to the iPhone BOM. Overall, the BOM cost is a major component in determining the cost of goods sold (COGS), itself an important measure in calculating the average selling price (ASP).

The IHS Markit graphic below shows the share occupied by memory in the iPhone's total BOM cost throughout the 10 years that the device has been on the market.

iPhone Evolution of Design and Cost

The original iPhone in 2007 contained just 4 gigabytes (GB) of memory, making memory the second-most expensive component in the iPhone BOM after the phone's display. At a total of $48 for those 4GB, memory cost came out to a sizable $12 per gigabyte (cost/GB) on average.

By the following year, market forces had caused the cost of memory to drop significantly. Historical IHS Markit Teardown data shows that the BOM for memory in the iPhone 3G in 2008 amounted to just $16-a huge $32 drop from the previous year.

At the same time, Apple boosted the base memory capacity in the 3G to 8GB. Together those two factors-a plunge in the price of memory, along with a doubling of the iPhone's memory-paved the way for a spectacular reduction in memory cost/GB.

From $12 in 2007, memory cost/GB was now just $2. By successfully curtailing its outlay on memory, Apple was also able to slash the iPhone BOM from more than $230 to approximately $177.
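The cost-per-gigabyte arithmetic cited above reduces to a simple division of the memory BOM cost by the phone's capacity; this short sketch reproduces the quoted figures.

```python
# Reproducing the memory cost-per-gigabyte figures quoted in this post.
teardown = {
    "2007 iPhone (4GB)": (48.0, 4),    # (memory BOM cost in $, capacity in GB)
    "2008 iPhone 3G (8GB)": (16.0, 8),
}

for model, (cost, capacity_gb) in teardown.items():
    print(f"{model}: ${cost / capacity_gb:.0f}/GB")  # $12/GB, then $2/GB
```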

Memory as a profit engine

In the early days of the iPhone, memory occupied a seat front and center in Apple's profit-generating strategy. This was no secret, as Apple capitalized on the decline in industry memory prices to cut its own memory BOM costs-all the while continuing to charge consumers a hefty premium via the iPhone's lofty retail pricing.

Between 2009 and 2015, from the 3Gs to the 6s, the iPhone featured only incremental upgrades in storage capacity, as shown in the IHS Markit graphic below. Yet consumers could expect to pay up to an additional $100 in price premium to avail of the extra memory.

In fact, however, memory had a variable cost of less than 20% of the retail price premium, enabling Apple to reap handsome profits from the lopsided equation.

Apple's Profit Engine

Evolution to a new model

With hardware no longer a revenue driver, memory plays a different role today in the Apple profit playbook, especially as Apple has shifted focus to services as its new profit engine.

Last year with the iPhone 7, the base memory configuration for the phone rose to 32GB, up from the base 16GB level that had remained in place for seven years from the iPhone 3Gs to the iPhone 6s. The larger base memory is no accident, instead representing for Apple a carefully calibrated mechanism that allows consumers to make optimal use of Apple's own incredibly rich ecosystem.

By doubling the iPhone's memory density, Apple is giving consumers a clear signal to utilize with confidence the vast storehouse of apps, music, books, and other media residing in the Apple App Store and iTunes. And should their phones run out of storage capacity, consumers are reminded that they can take advantage of fee-based Apple auxiliary services like iCloud.

Offering users today a memory-boosted iPhone at the forefront of consumption, Apple is able to happily monetize each ensuing consumer transaction to its own economic benefit.

IHS Technology Group at IHS Markit
Posted 26 September 2017

First Apple Watch with cellular will benefit from a strong mobile signal

The release of the new Apple Watch Series 3 is making noise in the marketplace because it's the first Apple Watch with built-in cellular connectivity. Apple promises Series 3 users the ability to "stay connected when you're away from your phone."

In practical terms, this means that Series 3 users can make calls, send texts, stream music, and more, entirely with their watch. Previous versions of the Apple Watch required an iPhone in close proximity for all wireless connectivity. The LTE version of the Apple Watch Series 3, however, allows users to switch to mobile carrier networks for wireless connectivity for usage when it's not convenient to carry an iPhone.

A cellular smartwatch has a much smaller single antenna than a smartphone's, and as a result will be less sensitive to weak signals than larger mobile devices with antenna diversity. Apple has placed the antenna underneath the watch display, demonstrating tremendous innovation in preserving the slim form factor of the Apple Watch. Existing cellular smartwatches such as Samsung Gear and LG devices rely on added volume to accommodate LTE antennas or embed them into the bands of the watch. Additionally, the Apple Watch Series 3 Cellular supports a smaller selection of mobile network bands, or frequencies, than modern smartphones.

The impact of these differences is that not all networks are created equal and not all Series 3 Cellular users will experience the same level of connectivity. Especially at the cell edge, the Apple Watch could encounter difficulty holding on to weak LTE signals, presenting sub-par mobile performance relative to that of the companion iPhone.

Connectivity will differ, depending on carrier and location

While the Apple Watch Series 3 promises many benefits (particularly for those who want to stay connected while exercising), the ability to connect and the quality of connectivity will vary, depending on the mobile network associated with the watch, as well as a carrier's performance in a particular location. In order to utilize the connectivity features of the Series 3, users must pay an additional fee to their mobile network, and the carrier must be the same as that of the user's iPhone.

In situations where a user's iPhone is not in close proximity to the watch, not all US carriers will deliver the same level of connectivity. With AT&T or T-Mobile, the Apple Watch Series 3 uses either LTE or 3G for connectivity. However, the Series 3 does not support the 3G technology of either Sprint or Verizon. Instead, the watch will only connect to LTE to make Voice over LTE (VoLTE) calls or send texts with Sprint or Verizon.
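The per-carrier behavior described above amounts to a simple decision rule, sketched below in Python. The table and function are purely illustrative summaries of this post's characterization, not any actual device logic.

```python
# Illustrative summary of the per-carrier connectivity rules described above.
FALLBACK_3G = {"AT&T": True, "T-Mobile": True, "Sprint": False, "Verizon": False}

def watch_connectivity(carrier: str, lte_available: bool) -> str:
    """Return the network a Series 3 would use, per the rules in this post."""
    if lte_available:
        return "LTE (VoLTE calls and texts)"
    if FALLBACK_3G.get(carrier, False):
        return "3G fallback"
    return "no cellular service"

for carrier in FALLBACK_3G:
    print(carrier, "->", watch_connectivity(carrier, lte_available=False))
```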

Within major metropolitan areas, this difference in connectivity among carriers might not make a noticeable difference. RootMetrics, an IHS Markit company, has measured mobile network performance under real-world conditions for many years and has collected hundreds of millions of test samples across the 125 most populated metropolitan markets in the US, within each of the 50 states, and more. In the first half of 2017, RootMetrics assessed the network technology capabilities of the four major US carriers during testing in metropolitan markets and across the 50 states. RootMetrics testing shows that all four carriers offer a strong LTE footprint within the 125 largest US metro areas.

Outside of metro areas, however, users could see different performance, with coverage gaps particularly important to users in rural areas. This is where provisioning for the Series 3 to access both 3G and LTE networks becomes more noticeable. AT&T users will be able to take advantage of both 3G and LTE networks with the Series 3 watch, and AT&T's coverage in rural areas outside of metropolitan markets is over 90% on 3G and LTE combined. Verizon users won't be able to access 3G, but Verizon's LTE footprint is robust and covers over 90% of the rural and non-metro areas RootMetrics has tested.

The story shifts, however, when considering the potential impact on Sprint and T-Mobile. The LTE footprints of Sprint and T-Mobile aren't as large as those of AT&T or Verizon outside of metropolitan markets. RootMetrics testing suggests that across the more rural areas outside of metros, both T-Mobile's and Sprint's LTE footprints are at around 67%. Since 3G support is not available for the Series 3 on Sprint's network, this means there could be locations outside of metro areas where coverage becomes problematic for owners of the Series 3. T-Mobile users, on the other hand, will be able to take advantage of 3G service when LTE is not available. For T-Mobile users, LTE coverage plus 3G coverage (much of which is provided by an agreement with AT&T in rural areas) provides a footprint close to 82%.

The above scenarios will not affect the vast majority of Series 3 users, who are likely to have their iPhone nearby and/or to be using their watch within metropolitan areas. And, for both Sprint and T-Mobile, any performance hiccups due to coverage should ease as their LTE footprints continue to expand into rural areas.

Keep in mind that RootMetrics testing figures are averages based on millions of test samples collected across the 125 most populous US metropolitan markets and throughout each of the 50 states. That said, those figures offer directional guidance for each network's LTE capabilities. Real-world Apple Watch Series 3 cellular results will vary, depending on how good-or poor-each network's service is in any given location, as well as on the real-world performance of Apple's innovative embedded display antenna design.

With the growing number of devices connecting to mobile networks as more and more objects become smart, connected, and part of the Internet of Things, the quality of mobile network coverage becomes even more important. The proliferation of connected devices, including IoT devices like smartwatches, smart meters, eReaders, and more, requires strong network connectivity in order to ensure a good consumer experience.

To learn how the carriers performed in specific US metro areas, within each of the 50 states, or across the US as a whole, view the RootMetrics series of RootScore Reports, which characterize network performance under real-world mobile usage conditions.

IHS Technology Group at IHS Markit
Posted 27 September 2017

Achieving video business success for telcos

An event the scale of Amsterdam's annual International Broadcasting Convention (IBC) tradeshow-this year attended by a record 57,669 industry professionals-tends to leave no stone unturned in its demonstration and examination of the latest technology shaping the TV and video industry. Conversations with vendors about their latest wares and the direction of the industry are at the core of the event, but less common are discussions with executives on the operator side of the business about the issues they are facing. IHS Markit was, therefore, pleased and proud to be the sole analyst firm present at a special behind-closed-doors roundtable of pay-TV video executives organised by China's Huawei Technologies.

On hand to discuss the topic of video business success for telcos were representatives from a diverse mix of major pay-TV operators from some of the largest markets in Europe, Latin America, and Asia. Given the privilege of setting the scene for an extended discussion, IHS Markit presented its analysis on the evolution of the telco video business. As part of our presentation, we identified the 50 leading telco video groups by subscription and the performance benchmarks they have set.

IHS Markit top 50 telco video providers

IHS Markit top 10 telco video providers

Furthermore, we categorised, examined, and detailed the winning strategies that have characterised the success of these telcos.

IHS Markit routes to success for telco video operators

The analysis presented at IBC, and briefly outlined above, represents a preview of forthcoming deep-dive research to be published in a white paper, Video as a Core Service for Telcos: Analysis of 50 Leading Operators in Achieving Video Business Success, in November 2017.

Opportunities and challenges for telco video operators

The insights shared by operators in the roundtable discussion shed light on various aspects of the telco video business, including cause for optimism as well as the challenges faced in the changing landscape.

One consensus-a positive-was that telcos, as providers of access and not simply content alone, are well-placed to retain their position in the multiplay market. As long as consumers want digital content-whether video, music, or online games-they will need to pay for access to networks in order to get it. And, echoing the view of IHS Markit, the operators in attendance generally saw video as a core service that represents the most marketable hook for attracting customers to their broadband and mobile offerings. Indeed, it was video's indirect contribution to the broader telco business, in terms of its role as a bundle draw, that was highlighted by some as being more important than the revenue generated by video in isolation.

Some operators are being more aggressive than others in making video a core service for their customers, pushing alternatives to traditional pay TV in the form of flexible online video offerings bundled with mobile and/or broadband. Many are even completely unbundling access to video via standalone online services, to ensure that they have as many customer relationships as possible for ongoing cross-selling and upselling efforts-again, underlining video's importance as an indirect revenue generator.

Assessing shifts in the content rights landscape, the operators did not feel that fragmentation would threaten their strong position as video aggregators. With the likes of Disney unbundling their channels via dedicated direct-to-consumer (D2C) apps, operators will still be needed as distribution partners for these offerings, it was argued. Such a scenario points to the rise of a new kind of carriage deal, in which operators strike agreements to carry on-demand video apps instead of linear channels, much like they have with Netflix, itself a channel-like online video service.

However, in spite of content owners' ongoing need for telco distribution partners, the D2C trend is still, in the IHS Markit view, a somewhat worrying development for operators when it comes to their pay-TV ambitions. A wider, a-la-carte distribution of key subscription video content on alternative platforms (e.g., those of Amazon, Google, Roku, and others) has the potential to marginalise traditional pay TV and undermine its appeal.

As to challenges, one area in which operators agreed they needed to improve is customer data-in better using what they have, as well as in gathering more of it. One executive admitted that after prioritising things like product strategy or customer acquisition and retention, collecting and analysing data was the last thing his company thought about-yet was what they needed to give more attention to, in order to unlock its value.

One problem, though, is the lack of access to data for operators when they work with partners such as Netflix and YouTube. This is because their visibility ends once the customer enters the app, at which point operators do not know how such customers are using the services. This highlights one of the challenges of providing access to third-party apps, compared to linear channels and in-house on-demand services.

While the operators at the roundtable expressed a willingness to work with online content services that either behave like channels (Netflix) or aggregate semi-professional multi-channel network (MCN) content that is not widely carried by traditional TV (YouTube), they were more wary of online aggregators moving into professional programming-specifically, Facebook.

The social network's new Watch service, which has ambitions of hosting long-form, 30-minute TV shows, was identified as posing a threat to pay TV, by providing an alternative platform for operators' traditional content partners. With Facebook, Twitter, Snap, and others ramping up their video growth strategies, they will become increasingly bandwidth hungry.

This might be good news for operators from a broadband and mobile perspective, but is less positive for their video business.

Ted Hall is Research & Analysis Associate Director, Television, within the IHS Technology Group at IHS Markit
Posted 3 October 2017

Interventional X-ray systems to see growth in demand worldwide

The increased prevalence of ailments such as cardiovascular disease and stroke is fuelling demand worldwide for interventional X-ray systems, a medical specialty providing image-guided minimally invasive diagnosis and treatment in order to lessen patient risk.

Driven by an ageing population and behavioural risk factors, cardiovascular disease is the leading cause of death, according to the American Heart Association, accounting for more than 17.3 million deaths in 2013. That number is expected to rise to more than 23.6 million by 2030.

The second leading global cause of death behind heart disease in 2013 was stroke, also a cardiovascular disorder, accounting for 11.8 percent of all deaths worldwide.

To this end, demand will grow for interventional X-ray systems, IHS Markit believes, especially as the systems are harnessed in support of various interventional cardiology procedures, such as transcatheter aortic valve implantation (TAVI), transcatheter aortic valve replacement (TAVR), percutaneous coronary intervention (PCI), and abdominal aortic aneurysm (AAA) repair.

Overall, interventional cardiology procedures are being performed more often today because of increased reimbursement by health providers, along with a greater awareness among both medical practitioners and patients of the clinical benefits to be derived. Interventional procedures are also now easier for physicians to perform, helping to reduce patient risk.

IHS Markit chart of interventional X-ray manufacturers continuing to tailor interventional X-ray systems

In the case of strokes, with the number of cases also growing worldwide, the choice of mechanical thrombectomy as a less aggressive mode of treatment will benefit the interventional neurology market. Here demand is projected to increase over the next five years, with angiography systems preferred to visualize and guide thrombectomy. These systems must deliver uncompromising image quality with minimal interference in the interventional procedure, as well as precise and flexible positioning control.

In the United Kingdom, more stroke patients will be able to access mechanical thrombectomy as plans call for the treatment to be rolled out to 8,000 stroke patients a year-facilitated by the huge expansion in the number of hospitals offering this procedure, compared to the few that offer it today. As a result, thousands of stroke patients will be saved from lifelong disability.

With the rise in demand for mechanical thrombectomy comes a need to further drive innovation in comprehensive imaging capabilities. And as developments in the field continue, even more complex procedures can be performed. In turn, vendors can refine their devices accordingly, enabling greater ease and safety for better patient outcomes.

New interventional procedures also have a role

Several novel interventional procedures can also be performed as an alternative to drug treatments historically administered as part of the patient care pathway.

For instance, atrial fibrillation-one of the most important risk factors for stroke, as blood clots can form in the left atrial appendage-affects 33.5 million people globally. Left untreated, it is responsible for an estimated 15% of all strokes. While the conventional treatment for this condition is a prescription for blood thinners, not all patients can be treated successfully through this method. But thanks to advances in interventional cardiology, it is now possible to perform a minimally invasive interventional procedure known as left atrial appendage closure (LAAC), which seals off a small sac in the heart where blood clots have a tendency to form. Clinical demand for LAAC procedures is likely to take root first in North America, thanks to ongoing innovations in the imaging capabilities of interventional cardiology X-ray systems.

Overall, new technological developments in interventional X-ray systems are allowing more complex procedures to be performed. Interventional suites should, therefore, include tailored features that match the therapeutic requirements of the interventional imaging technique and the skills of the interventional team.

And as patient cases grow in complexity and become more challenging, the need will arise for enhanced visualisation and image quality from interventional X-ray equipment vendors, ensuring that cases are handled safely and efficiently.

For more information on this subject, see our Interventional X-ray equipment 2017 report as part of the X-ray Intelligence service.

Bhvita Jani is an Analyst, Healthcare Technology, within the IHS Technology Group at IHS Markit
Posted 9 October 2017

Sprint and T-Mobile combined: The implications of a merger

The rumored and much-discussed merger between Sprint and T-Mobile would have myriad ramifications for both consumers and mobile carriers alike, creating a company with nearly 130 million subscriptions-a similar scale to that of market leaders AT&T and Verizon. Combined, Sprint and T-Mobile would have the scale and resources to provide stronger network competition to the more extensive networks of AT&T and Verizon. Consumers in both rural and urban areas should experience improved coverage and better mobile performance when the deal is finalized and the network operations are integrated. However, the deal is not as straightforward as it may sound.

While both carriers operate 4G LTE networks, Sprint and T-Mobile operate different 3G network technologies and own very different spectrum holdings. Combining the network assets and spectrum allotments of each network will likely prove a complex and time-consuming undertaking. That could potentially undercut the momentum that T-Mobile has managed to build since the failure of its merger with AT&T in 2011, which handed T-Mobile a breakup fee worth $4 billion, including $3 billion in cash, a spectrum access deal worth an additional $1 billion, and a seven-year roaming agreement. Since then, T-Mobile has more than doubled its subscriber base in a market that has grown by just 25%. Despite often drastic (and expensive) marketing moves, Sprint-acquired by Japanese operator Softbank in 2013 with an intent to merge with T-Mobile-has seen subscription growth stagnate alongside significant declines in revenue and profitability.

Sprint and T-Mobile users in rural areas may benefit the most from the combined network through improved coverage and the resources to invest in the increasingly dense networks that will support 5G. Read on to see what RootMetrics, an IHS Markit company, believes this merger would mean for consumers.

Device support

With Sprint using CDMA for their 3G network and T-Mobile using the GSM/UMTS standard, Sprint and T-Mobile currently sell and support specific devices that work on their own networks, including iPhones in which chipset providers differ (Intel vs. Qualcomm, for example) depending on the network. Current Sprint devices will not have the same experience as a current T-Mobile device on the combined network, and vice versa. That may lead to increased churn as users find that service degrades on the less favored network (likely to be Sprint's CDMA) as the networks are integrated.

In order for users to take full advantage of the merged network's services, once the merger is complete and a network integration plan is clear, OEMs such as Samsung, LG, Apple, and others must begin producing devices that support the features and frequencies of the combined Sprint and T-Mobile network. It's important to note that many current Sprint and T-Mobile devices will still be able to take advantage of certain features of the combined network, and users will not necessarily need a new device.

Coverage

Both carriers have strong coverage in urban areas and metropolitan markets across the US. Indeed, neither Sprint nor T-Mobile exhibited any significant domestic roaming during RootMetrics testing across the 125 most populated metro areas in the US in the first half of 2017.

The current Sprint and T-Mobile networks both utilize low-band spectrum, which provides better signal penetration for challenging in-building locations, as well as strong coverage over large distances. T-Mobile utilizes 700 MHz spectrum and is currently deploying its 600 MHz spectrum widely, while Sprint uses its 850 MHz band spectrum.

In terms of providing coverage outside of metropolitan markets, neither carriers' coverage encompasses as wide an area as that of AT&T or Verizon. RootMetrics testing across each of the 50 states (outside of metro areas) observed that Sprint and T-Mobile often roamed on their competitors' networks domestically. During RootMetrics state testing, Sprint roamed on competitor networks approximately 27% of the time, while T-Mobile's network roamed at a rate of 26%. The RootMetrics tests were performed with the most current consumer devices and represented the real-world consumer experience of using data, call, and text services.

Once the merger takes place, RootMetrics expects that the new combined network will directly compete with AT&T and Verizon for coverage in rural areas, rather than roaming on competitors' networks. However, in order for the combined network to compete effectively with AT&T and Verizon, many new towers will need to be deployed utilizing low-band frequencies (600/700/850). See the maps below for a look at how prevalent domestic roaming was for Sprint and T-Mobile during RootMetrics testing of the 50 states in the first half of 2017, as well as for a look at how roaming is affected when the networks of Sprint and T-Mobile are combined.

Sprint roaming map (RootMetrics State testing, H1 2017)

T-Mobile roaming map (RootMetrics State testing, H1 2017)

Sprint and T-Mobile combined roaming map (RootMetrics State testing, H1 2017)

As these roaming maps suggest, the combined Sprint and T-Mobile network would eliminate a large portion of domestic roaming, bringing it down to roughly 16%. In some areas, however, new towers will be required in order to provide coverage. It's also worth noting that T-Mobile is currently in the process of deploying its 600 MHz band spectrum widely across the US. This deployment suggests population coverage would increase from 315 million to 321 million people, but the difference in coverage on an area basis would be more significant.

Spectrum assets

Sprint is well known for its rich spectrum assets, particularly on the 2.5 GHz TDD spectrum. Sprint is effectively using that spectrum for 3-carrier aggregation (3CA) on its LTE network, which allows high peak downlink speeds to be achieved. Sprint has also been deploying its 2.5 GHz TDD spectrum for wireless backhaul purposes, which should give Sprint a distinct advantage over other carriers by eliminating reliance on wired networks and reducing backhaul costs. This spectrum, which would have been considered high band in previous-generation deployments, is also well suited to future 5G deployments, which are set to be increasingly dense and used with higher spectrum bands. Furthermore, Sprint owns frequencies in the 850 MHz and 1900 MHz FDD bands, which help with both in-building and rural coverage.

T-Mobile, meanwhile, currently has spectrum holdings in the 600 MHz, 700 MHz, 1900 MHz, and AWS bands. And as noted above, T-Mobile is currently in the initial stages of deploying its 600 MHz spectrum across the US.

In short, the combined networks' spectrum assets will be both deep (large bandwidth) and varied (low and high bands) and should be able to effectively compete with AT&T's and Verizon's spectrum assets for use with 4G LTE, 5G, and IoT.

Network integration

Integrating networks has not always gone smoothly. For perspective, consider the Cingular Wireless and AT&T Wireless Services (AWS) merger that occurred in late 2004. That merger was much simpler than what we expect to see between Sprint and T-Mobile: Cingular and AT&T had similar network infrastructures-both were GSM-based-yet integrating the two networks still took several months to complete.

Sprint and T-Mobile each have much more complicated networks today compared with the seemingly "simple" Cingular and AT&T Wireless networks of 2004. As one example of the differences between Sprint and T-Mobile, consider that T-Mobile supports VoLTE while Sprint carries its voice traffic on 1xRTT. Moreover, T-Mobile is GSM-based and Sprint is CDMA-based, and the two carriers do not always share the same infrastructure vendors in the same regions of the country.

In urban and suburban areas, both carriers' coverage has significant overlap. The integration of coverage, including tower-by-tower decisions on which towers to keep or which to forfeit, will be a lengthy process that will likely span several months. Furthermore, both companies have different approaches to in-building, small cells, and IoT coverage that must be aligned.

On the other hand, earlier mergers have provided valuable lessons that Sprint and T-Mobile might draw upon. Learning from the example of Canadian and Korean operators as they prepared their networks for 4G, Sprint and T-Mobile might decide to shut down EV-DO, leave voice on RTT for a while if necessary, implement HSPA in the EV-DO band and then bridge it to LTE. This would simplify the integration process.

There is also the possibility that both networks are simply maintained separately. Both T-Mobile and Softbank have experience running disparate networks and could choose that same path after the merger. For instance, in Japan Softbank runs three brands with three different spectrum assets: Softbank Mobile uses W-CDMA and FD-LTE, WCP runs on TD-LTE, and Y!Mobile takes advantage of both TD-LTE and FD-LTE.

Expected outcome

RootMetrics expects that the new company and combined network would be able to compete more effectively with Verizon and AT&T on several fronts, including delivering better coverage in rural areas than either Sprint or T-Mobile alone, improved performance in both urban and rural environments, strong nationwide IoT coverage, and a more seamless deployment of 5G technology.

IHS Technology Group at IHS Markit
Posted 12 October 2017
