Channel: S&P Blog

Grid defection as attractive consumer alternative to reduce energy costs

On the back of rapidly decreasing costs for energy storage and solar photovoltaics (PV), consumers wishing to achieve a low-cost and reliable supply of power are considering grid defection—or at least, partial grid-defection—as an increasingly attractive alternative. But while lower energy costs are certainly welcome news for end customers, the same cannot be said for incumbent utilities, many of which will be facing new challenges that call into question their long-standing business model of selling and distributing electricity.

To be sure, grid defection is a term used largely in North America. Yet it is in Europe that the residential solar and energy storage markets have enjoyed greater penetration over the past few years. For instance, Germany alone already has more than 60,000 residential battery storage systems installed today, compared with fewer than 10,000 grid-connected residential energy storage systems in the United States.

This article, then, will focus specifically on the economics of grid defection in Europe today and in the future, and will also examine how such a development could impact the energy industry.

Grid defection: a competitive tool for customers

In Europe the economics for going off-grid are improving, but practical considerations make grid defection unlikely to be a major disruptor to existing traditional and entrenched systems of consuming power. Even so, partial grid defection—especially a scenario in which PV is coupled with a battery energy storage system (BESS)—is seen by customers as an increasingly attractive option to hedge against future rises in retail power prices.

As part of the latest report published by the IHS Markit Energy Storage Intelligence Service, we carried out in-depth modelling using hourly demand profiles to compare the levelized cost of energy (LCOE) under four scenarios: future retail rates without any onsite generation; a solar PV system alone; a combination of solar PV and battery energy storage to maximise self-consumption; and a full grid-defection scenario equivalent to 100% self-consumption. The term LCOE, which compares the relative cost of energy produced by different energy-generating sources, is often touted as the new metric in evaluating cost performance of PV systems.
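As a rough illustration of how an LCOE figure is produced, the sketch below discounts both lifetime costs and lifetime energy output. All inputs are hypothetical round numbers for a residential PV system, not the IHS Markit model's actual assumptions:

```python
def lcoe(capex, annual_opex, annual_energy_kwh, discount_rate, years):
    """Levelized cost of energy in currency units per kWh.

    Both the cost stream and the energy stream are discounted over the
    system lifetime, so LCOE can be compared directly with a retail rate.
    """
    disc_costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                             for t in range(1, years + 1))
    disc_energy = sum(annual_energy_kwh / (1 + discount_rate) ** t
                      for t in range(1, years + 1))
    return disc_costs / disc_energy

# Hypothetical system: EUR 6,000 installed, EUR 100/yr O&M,
# 4,500 kWh/yr output, 4% discount rate, 25-year lifetime.
print(round(lcoe(6000, 100, 4500, 0.04, 25), 3))  # roughly 0.11 EUR/kWh
```

A value in this range sits below typical German household retail rates, which is the sense in which PV self-consumption "reaches parity."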

The modelling showed that:

  • Partial grid defection using solar PV alone is already competitive with retail rates in the four countries analysed—namely, Germany, Italy, the Netherlands, and the United Kingdom.
  • When analysing the LCOE offered by a combined PV and battery storage solution, IHS Markit shows that this case has already reached parity with retail rates in Germany and Italy, and will do so in the United Kingdom by 2023.
  • Complete disconnection from the grid will not become economically attractive until at least 2025, IHS Markit expects. 
  • For battery storage, the so-called sizing sweet spot, the most economically attractive capacity range, lies between 5 and 8 kilowatt-hours for the BESS.
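A sizing sweet spot emerges because, beyond a certain capacity, the extra battery kilowatt-hours are rarely cycled. The toy simulation below uses an invented 24-hour demand and PV profile and a greedy hourly dispatch rule (none of this reflects the IHS Markit model) to show self-sufficiency flattening as capacity grows:

```python
def self_sufficiency(demand, pv, battery_kwh, efficiency=0.9):
    """Share of demand met by on-site PV plus battery (greedy dispatch)."""
    soc, self_supplied = 0.0, 0.0
    for d, p in zip(demand, pv):
        if p >= d:                      # surplus hour: serve load, then charge
            self_supplied += d
            soc = min(battery_kwh, soc + (p - d) * efficiency)
        else:                           # deficit hour: discharge battery first
            from_batt = min(soc, d - p)
            soc -= from_batt
            self_supplied += p + from_batt
    return self_supplied / sum(demand)

# Toy hourly profiles (kWh): evening-weighted demand, midday PV generation.
demand = [0.3] * 8 + [0.5] * 8 + [1.0] * 8
pv = [0.0] * 6 + [1.5] * 8 + [0.0] * 10
for size_kwh in (0, 2, 5, 8, 12):
    print(size_kwh, round(self_sufficiency(demand, pv, size_kwh), 2))
```

With these toy profiles, self-sufficiency climbs steadily from 25% with no battery, but stops improving once the battery is large enough to absorb the entire midday surplus: the marginal kilowatt-hour buys nothing.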

Regardless of the economics involved, we do not expect a scenario in which customers are fully disconnecting from the grid or achieving 100% self-sufficiency to become reality in Europe. This is due to both the technical impracticalities stopping such a scenario from taking root, and also because a widely stable electricity grid is already in place that makes complete grid defection unnecessary.

Nonetheless, IHS Markit expects that more than a million residential battery energy storage systems will be installed in Europe by 2025, driven primarily by customers looking to take control of their energy supply and to hedge against future price uncertainty.

New business models offer virtual solution toward 100% independence

As customers look to minimize reliance on the grid as much as possible, they must either actively manage their demand or consider new business models as an alternative. A number of players, including Sonnen, E3DC, and Beegy, have started to act as electricity suppliers, offering customers the capability to achieve so-called virtual independence, through the mechanisms described below.

  • In the most common scenario, customers will feed excess solar power into the grid, but the solution provider will meter the electricity and add it to the customer’s account. During times when the customer is unable to generate enough solar power, they receive this electricity back from the supplier.
  • Alternatively, some suppliers are offering flat-rate electricity tariffs. Here the customer—with solar and storage installed—pays a fixed fee, which covers all electricity demand that is not self-produced (regardless of consumption levels).
  • Other players offer Virtual Power Plant solutions, which provide additional revenue to customers by aggregating distributed storage systems to participate in ancillary services markets.
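The first mechanism, banking exported kilowatt-hours with the supplier, amounts to a simple ledger. The sketch below is a toy model with an assumed 1:1 kWh credit; real tariffs apply service fees and less generous conversion factors:

```python
class VirtualEnergyAccount:
    """Toy model of 'virtual independence': exported solar kWh are banked
    with the supplier and drawn down later (assumed 1:1 kWh credit)."""

    def __init__(self):
        self.balance_kwh = 0.0      # kWh banked with the supplier
        self.grid_purchases = 0.0   # kWh still bought at the normal tariff

    def settle_hour(self, generation_kwh, consumption_kwh):
        net = generation_kwh - consumption_kwh
        if net >= 0:
            self.balance_kwh += net              # export: credit the account
        else:
            draw = min(self.balance_kwh, -net)
            self.balance_kwh -= draw             # import: use banked credit first
            self.grid_purchases += -net - draw   # remainder bought normally

acct = VirtualEnergyAccount()
# Two sunny hours followed by two evening hours (kWh generated, kWh used).
for gen, use in [(3.0, 1.0), (2.5, 1.0), (0.0, 2.0), (0.0, 3.0)]:
    acct.settle_hour(gen, use)
print(acct.balance_kwh, acct.grid_purchases)  # banked credit exhausted, 1.5 kWh bought
```

The customer appears "100% independent" for as long as banked credits cover evening deficits; only the final 1.5 kWh here is billed at the regular tariff.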

The challenges of decentralised generation and self-consumption

Irrespective of the extent to which customers defect from the grid, or whether they choose new virtual self-consumption models, the shift toward the so-called energy prosumer fundamentally challenges the incumbents—electricity suppliers and network operators alike. In fact, the share of self-consumed power as part of total electricity demand is expected to double by 2020.

Still, IHS Markit predicts that utilities will play a major role in the growing energy storage market. Already, 7 out of the 10 biggest electricity suppliers in Europe, including E.ON and EDF, are commercialising propositions related to behind-the-meter energy storage.

Personally I believe the trend toward grid defection in order to increase energy independence won’t stop. We see strong economic and emotional drivers supporting this shift. At the same time, utilities are clearly starting to understand that the energy landscape is shifting around them, and that they will be able to play a crucial role—not only in commercialising new business models, but also in binding customers through energy storage.

For more information on this and related topics, visit our Energy Storage Intelligence Service.

Julian Jansen is Senior Analyst, Solar, at the IHS Technology Group within IHS Markit
Posted 24 July 2017


TV content and cord-cutting: conflict on the wire

On TV, content is crucial to retaining customers. Without fresh and satisfying high-quality shows, customers will inevitably look to other platforms for their entertainment needs. Traditionally, TV content has adhered to a fairly rigid structure for monetization via carriage fees and broadcaster advertising. Movies also benefit from the pay-TV ecosystem, but often an initial box-office run is enough to ensure the individual profitability of films.

Many US pay-TV-video subscribers are chafing at the weight of monthly package prices, with average revenue per user (ARPU) hitting a staggering $92.30 in 2016, IHS Markit estimates. At the end of the same year, 1.3 million consumers ended their subscriptions to monthly cable services provided by US operators like Verizon, Comcast, Time Warner, and AT&T. These subscribers who “cut the cord,” so to speak, are known in the industry as cord-cutters.

By comparison, 1.8 million subscribers cut the cord in 2015, when ARPU was lower, at $89.34.

And cord-cutting is set to continue, given the steady increase in ARPU that can be expected. In turn, this will mean less money available for content creation, unless ARPU gains can grow revenue faster than the shrinking subscriber base erodes it.

To a certain extent, the increases in pay-TV ARPU will make up for the loss of revenue due to cord-cutting and will be the norm through 2021. Even so, it seems inevitable that ARPU growth alone won’t be able to keep pay-TV-video revenue growing indefinitely.
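The tension between rising ARPU and a shrinking base is simple arithmetic: revenue is subscribers times ARPU. In the sketch below only the $92.30 starting ARPU comes from the article; the subscriber base, loss rate, and ARPU growth rate are invented for illustration:

```python
def video_revenue(subscribers_m, arpu_monthly):
    """Annual pay-TV video revenue in USD billions."""
    return subscribers_m * arpu_monthly * 12 / 1000

# Hypothetical trajectory: a 90M-subscriber base losing 1M subscribers a
# year while ARPU grows 3% annually from the reported $92.30.
subs, arpu = 90.0, 92.30
for year in range(2016, 2022):
    print(year, round(video_revenue(subs, arpu), 1))
    subs -= 1.0
    arpu *= 1.03
```

Under these assumptions revenue keeps inching up through 2021 because 3% price growth outpaces a roughly 1% subscriber loss; a faster loss rate or flat ARPU flips the trend, which is exactly the squeeze described here.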

Some may argue that existing content businesses could have another revenue stream, in the form of over-the-top (OTT) video offerings such as Netflix. But can these successor services serve as a direct replacement for the existing pay-TV-video businesses?

The answer is no, as OTT offerings simply don’t generate enough revenue to sustain the content creation ecosystem. In 2016 Netflix reported worldwide streaming ARPU at $8.61 per month and revenue of $8.8 billion—far less than what would have been required to support the diversity of content currently found on pay-TV video. In comparison, Comcast’s monthly ARPU the same year was $83.19, with the company spending $11.6 billion on programming in order to bring in $22.4 billion in video revenue. 

Is the channel business under threat of imminent collapse?

Again, the answer is no. However, what’s likely to occur over the next decade is a reduction in the number of channels overall. Today, we’re seeing some evidence of what is just the tip of the iceberg, and IHS Markit expects that there will be major changes in the channels being offered by the biggest US channel groups in the next decade.

In February 2017, following the release of its 2016 earnings, Viacom announced it would be refocusing its efforts on six key brands: MTV, BET, Nickelodeon, Nick Jr., Comedy Central, and Spike. The announcement doesn’t necessarily mean the closure of channels like TV Land, Bet Jams, Centric, or CMT, but it does mean that the smaller and ailing channels won’t receive the same attention and budgeting that they have enjoyed in years past.

Is this just a Viacom problem?

No. Like Viacom, the majority of the biggest US channel groups will also have several cable networks in their portfolio that will be underperforming and are thus ripe for reductions in both programming and marketing spend. The incursion of Netflix and other OTT solutions will have an even more disastrous effect on the long-term viability of independent channels. IHS Markit believes that pay-TV operators will be forced to pay carriage fees to major channel groups before they are able to allocate any payments for independent channels.

So, will Netflix kill the content industry?

Netflix and other OTT pay-TV alternatives won’t be the death of traditional pay-TV-video service in the United States. However, these OTT options are contributing to the general malaise of the traditional pay-TV-video space.

Still, as more video is being delivered through the open internet than ever before, creative solutions are also being developed that may allow content owners to neatly sidestep issues involving the reduced number of pay-TV-video subscribers. For instance, monetizing the use of data networks and bundling with third-party mobile and broadband packages have already proved effective in some markets.

Just the same, the prognosis isn’t encouraging. While further solutions are likely to present themselves, the forecast loss of another 3.9 million US pay-TV-video subscribers between 2017 and 2021 points to a turbulent near future for the television industry.

Erik Brannon is Principal Research Analyst, Television Media, at the IHS Technology Group within IHS Markit
Posted 31 July 2017

Virtual Reality in retail: the case for business drivers

In the abstruse technological arena of virtual reality (VR), there is no shortage of hype or outsize conjecturing. True, the format is now widely accepted by the games industry, where immersive experiences serve to enhance many applications. But how applicable is this success to other industries?

For instance, do home-shopping channels and the ecommerce sector need to be prepared to invest in VR technology? Will companies gain a competitive edge if they have a VR app?

To discuss these topics, I recently had the pleasure of moderating a panel during the Multi-Channel Money Streams (MCMS) Congress, held on the occasion of the Electronic Home Shopping Conference this past June in Venice, Italy. Joining me in the panel were three industry leaders: Richard Burrell, previously at QVC, the world's leading video and ecommerce retailer; Maurits Bruggink from EMOTA, or the European ecommerce and Omni Channel Trade Association; and Alex Kunawicz from Laduma, a Virtual Reality content company. The event offered a unique opportunity to hear from industry experts and thought leaders on topics expected to affect the future of the modern multi-channel industry.

The first panel in the conference focused on the subject of business drivers of virtual reality, as well as on the practical implications of VR for the retail industry.

Based on the findings of a 2016 report that we published on the subject, IHS Markit believes that VR remains a niche technology—not only for now but also for a good number of years moving forward. There are several reasons, but in particular, the added expense of specialized headsets, along with the need for users to possess a technical understanding of the format, means that VR is not yet consumer friendly or ready for the mass market.

There is also the issue of fragmentation and poor content discoverability, neither of which helps consumer adoption. For instance, while there are many notable applications already available for virtual reality, most consumers are still not aware of them. And many current owners of VR headsets do not know where to find content, or even which platforms are available and compatible with their particular brand of headset.

In order, then, to make VR more successful, investment is needed in creating content, standardising technologies, raising consumer awareness, and educating the consumer, all serving to underscore the benefits of using VR as well as the availability of the technology.

And yet, an important question must be addressed: who should be leading the charge toward a VR future?

The big guns duke it out

The likes of Facebook, Google, and Samsung are heavily investing in VR, but which of them is, in fact, going to push things forward?

In the case of Facebook, the company has invested $2 billion in its Oculus headset offering, and such a large investment from a ubiquitous global content platform implies a desire to make the technology as popular and widespread as possible—far beyond today’s niche user base. To expedite matters, Facebook will be adding live video streaming to its social VR product, called Spaces, which lets users operate avatars that hang out with the avatars of other users in a virtual world. From all appearances, Spaces is not only an innovative application that meshes well with the company’s social media roots, but also a marker of Facebook’s intent to raise awareness of its initiative and make VR more popular in general. The numbers tell a daunting story: while Facebook currently has a global user base of approximately 2 billion, less than 1% of that base owns a VR headset.

Current opportunities: but who controls the VR headset market, anyway?

Smartphone VR makes up the vast majority of headsets, counting an installed base in 2016 of almost 16 million units, equivalent to 87% of the VR addressable market by device. By 2020, however, this share is forecast to erode to 53% as other platforms find traction. Even so, smartphone VR will remain the biggest addressable market for VR content.

 

Among VR players, Google’s Daydream will become the dominant smartphone VR platform by 2019, reaching a projected 14 million units by then. IHS Markit believes that Google’s next-generation VR platform will by that time eat into sales of Samsung’s Gear VR headset.

Across the high-end VR headset market, Sony’s PlayStation VR is expected during the early part of our forecast to outsell PC-based VR headsets, including Facebook’s Oculus Rift and Taiwanese-based HTC’s Vive. By 2019, the installed base of PC-based headsets will overtake the game-console-based PlayStation VR, as adoption of Sony’s headset slows in tandem with the aging PS4 sales cycle. Overall, however, consumer adoption of VR across all platforms will be slow as buyers take their time in evaluating the technology. 

Investing in VR: should you do it?

So, apart from Facebook’s “Spaces,” is VR applicable to just games and experiences? In fact, there have been some successful applications of VR in other markets—such as the retail industry. Some examples of VR use in retail include the following:

  • Customising of cars as well as bespoke automotive products (Audi)
  • Virtual walk-throughs and remote selling in real estate and hotel bookings
  • Interior design and decor (IKEA)
  • Ecommerce: Chinese ecommerce retail giant Alibaba has a VR retail app called Buy+ that lets consumers shop in Alibaba’s virtual reality universe, enabling China’s vast consuming public to explore big shopping centres across other parts of the world.

But while VR can be deployed, as shown above, in fresh and innovative ways for some products, this does not mean that the VR experience is universally applicable, especially as the marketing hype of the format or the novelty factor begins to wear thin.

When deciding on whether to invest in VR technology, companies will need to consider two important factors: the big investments required by VR, as well as the less-than-certain revenue or return that can be expected from an investment in the new technology. For instance, is direct monetisation through the use of VR a realistic enough expectation for a company to justify investing in the technology?

For many companies contemplating a VR venture, the hope is to avoid repeating the mistakes made during the introduction of 3-D technology. Just because you can do it doesn’t mean you should. In the television sector, many players that had invested in 3-D did so in the belief that the new and exciting technology was the way to go, and that not investing meant being left out or appearing old-fashioned and out of touch. Fast-forward to today, and 3-D live channels launched by companies and operators have mostly closed down. For their part, TV brands have all but abandoned the production of 3-D television sets.

The latest analysis from IHS Markit shows that VR is already performing better than 3-D TV ever did.

Following a very lively session, our panel at the MCMS Congress came to this conclusion: when it comes to investing in VR for retail and ecommerce, it is still too early to get a clear sense of the return on investment, or of how much companies should commit at this point. But as is the case with all new technologies, it is unquestionably important to stay aware of and up to date with developments taking shape in the VR space, allowing for a considered and intelligent assessment of what VR could bring to your business.

For more info:

Maria Rua Aguete is Executive Director for Media, Service Providers & Platforms, at the IHS Technology Group within IHS Markit
Posted 14 August 2017

Global Display Conference to present far-reaching view of display industry

IHS Markit is hosting “Global Display Conference 2017,” a two-day forum on important current and emerging issues affecting the worldwide display supply chain.

The event, to be held 19-20 September at the Hyatt Regency San Francisco Airport, brings together industry leaders, analysts, and subject-matter experts in various planned sessions throughout the conference to offer attendees valuable perspectives unique to the display landscape, including:

  • Analysis and insight on supply chain implications, and moves by Chinese brands to expand their presence worldwide
  • Challenges and solutions in touch technologies
  • OLED display technology evolution and trends for end-market applications, including TVs, smartphones, and computing
  • New technologies and their likelihood for market adoption
  • Automotive technology and its prospects for both near- and long-term

Day One sessions

Kicking off the conference on Day One is David Hsieh, director of research and analysis for displays at IHS Markit, in a keynote session that will include companies from the display component, panel-maker, and semiconductor industries. The keynote is expected to touch on a number of themes, such as major display market and technology trends, industry long-term growth, consumer preferences, and the continuing evolution of display technology. Joining Mr. Hsieh at the keynote session will be speakers from Intel, Corning, Canon, and BOE.

Following the keynote is the OLED & Flexible Displays session, to be led by Jerry Kang, principal research analyst for displays at IHS Markit. Joining Mr. Kang in discussing technology, innovation, and advancement in OLED and flexible displays will be speakers from Cynora, Universal Display Corp., and Kateeva.

The third session on Day One is devoted to examining the state of the global television market, amid a quickly shifting media landscape and facing significant challenges today in achieving continued growth and profits. Paul Gagnon, research director for consumer devices at IHS Markit, will be joined by speakers from Nanosys and LG Electronics.

Day One will end with a collaborative session on touch screens and interactivity, to be led by Calvin Hsieh, assistant director of research and analysis for displays. The session will explore the various approaches utilized by panel manufacturers to make displays interactive alongside their display-based user interfaces, including in-display fingerprint sensing. The companies in this session include FlatFrog, WaveTouch, Synaptics, and Qualcomm.

Day Two sessions

Day Two will feature a clutch of industry experts who closely monitor the entire global display spectrum, covering automotive displays, display manufacturing equipment and materials, digital signage, mobile PCs, and desktop displays.

Mark Boyadjis, principal analyst and manager of automotive user experience at IHS Markit, will lead the session on automotive displays. Mr. Boyadjis will speak on display trends—from novel technologies in display panels to innovative ways with which car occupants can interact with the vehicle. The session will also cover the human-machine interface (HMI) as it applies to automotive, with discussions on technical innovations in HMI, as well as best practices in UX—or interface—design as the industry works to renovate the in-vehicle user experience. The companies participating in this session are Delphi, Rightware, and Ultrahaptics.

The next session on display manufacturing equipment is timely, given that the display industry is currently undergoing the most significant changes in manufacturing and display technology since the mass production of flat-panel displays began in the 1990s. Charles Annis, senior director of display manufacturing technology at IHS Markit, will examine how evolving display technologies are necessitating new and advanced manufacturing approaches. Joining Mr. Annis are three leading supply chain companies: Applied Materials, Plansee, and Mycronic.

In the sphere of public displays, the sector is poised to benefit from lackluster consumer television sales, with revenue for public displays projected to enjoy a stellar compound annual growth rate of 18.2% from 2017 to 2021. Similarly, revenue for fine-pixel-pitch LED video displays is forecast to scale new heights this year, estimated to reach $436 million, up a heady 82% from 2016. Sanju Khatri, director of digital signage and professional video at IHS Markit, will lead this session on digital signage. Also on hand will be three companies—Elo Touch Solutions, E Ink, and SiliconCore Technology, Inc.—to explore the various applications and technology trends enabling such growth.


For the final session of Day Two, Rhoda Alexander will chair the Mobile PC and Desktop Displays session, exploring how form and function are changing usage patterns in these sectors, and vice versa. To be sure, the personal computer market is increasingly diverse, spanning tablets, notebook PCs, and desktop systems across home, office, classroom, and mobile settings. Dolby Laboratories will discuss market shifts and look into both future opportunities and risks for display and system providers alike.

View the detailed agenda covering the entire two-day conference.
 

Registration particulars

Registration for the two-day event is now open. Early bird registration, offered at a reduced fee of $599, ends 1 September; regular registration is $799. A discount of $99 is available for groups of two or more; all attendees must be registered at the same time for the discount to apply. A 10% discount is also available to previous attendees of the Global Virtual Event, held 16 May.

For more information or for assistance on registration, contact us.  

IHS Technology Group, IHS Markit
Posted 28 August 2017

Wi-Fi use in automobiles continues to build and grow

Over the past few years, the use of Wi-Fi technology in consumer automobiles has been steadily growing, with telematics units that offer broadband data leading the way in the wide adoption of in-vehicle Wi-Fi hotspot services, particularly in the United States. And in deploying Wi-Fi technology, infotainment system vendors are beginning to introduce wireless smartphone projection technology that no longer requires a USB connection.

Given these developments, IHS Markit projects that the total number of Wi-Fi-enabled automotive devices—infotainment systems and telematics systems combined—shipped in light vehicles will grow from 13.8 million units in 2016 to 47.3 million in 2021, equivalent to a compound annual growth rate (CAGR) of 28%.
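That 28% figure can be sanity-checked directly: with 2016 as the base year, growth from 13.8 million to 47.3 million units spans five compounding years.

```python
def cagr(start, end, years):
    """Compound annual growth rate: the constant yearly rate that turns
    `start` into `end` over `years` periods."""
    return (end / start) ** (1 / years) - 1

# 13.8M units in 2016 to 47.3M in 2021 is five growth years.
print(f"{cagr(13.8, 47.3, 2021 - 2016):.1%}")  # prints 27.9%, matching the reported ~28%
```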

Automotive Wi-Fi-enabled shipments by device type

In-vehicle Wi-Fi hotspot service gaining traction in North America

Although in-vehicle Wi-Fi hotspot was introduced in the United States as early as 2008, a more extensive implementation of broadband Wi-Fi hotspot service in automobiles did not occur until 2014. That was when GM OnStar introduced its 4G LTE Wi-Fi hotspot service to GMC, Chevrolet, Buick, and Cadillac vehicles sold in North America. Fast-forward to the end of the first quarter this year—almost three years after the initial launch—and more than 5 million 4G LTE Wi-Fi-equipped vehicles can be found on the road, OnStar reports.

Earlier this year in March, GM and Jaguar Land Rover started to offer in-vehicle 4G LTE hotspot service with unlimited data for $20 a month through AT&T. Other major vehicle manufacturers including Audi, Ford, and Volvo quickly followed suit, and the reduced pricing is expected to contribute to wider adoption of Wi-Fi connectivity inside automobiles in North America.

Wireless smartphone projection technology using Wi-Fi

Smartphone projection refers to integrating a smartphone with an infotainment system so that the phone’s app icons are projected onto the head unit’s touch screen. Using these icons, drivers can more safely access smartphone applications through the head unit while driving.

The two most popular projection applications currently in the market are Apple CarPlay, offered on the iPhone; and Google Android Auto, offered on Android-based smartphones. The majority of infotainment systems supporting Apple CarPlay and Android Auto currently require a USB cable connection. However, from late 2016, infotainment system vendors—first Harman, and then followed by Alpine Electronics—started to launch infotainment systems offering wireless smartphone projection technology.

Because drivers still need USB cables to charge smartphones inside automobiles, the benefits of wireless smartphone projection technology are currently limited, and the technology is not expected to grow quickly in the near future. Even so, as more vehicles adopt wireless smartphone charging over the next few years, wireless smartphone projection is expected to gain broader market acceptance, since together the two technologies eliminate the need to carry USB cables.

Adoption of V2V technology remains to be seen

Vehicle-to-vehicle (V2V) refers to an automobile technology designed to allow cars to communicate wirelessly with each other, so that vehicles in proximity—typically within 300 meters (about 985 feet)—can exchange information to warn of hazardous road conditions and avoid collisions. In the United States, V2V communication uses WAVE, or Wireless Access in Vehicular Environments, defined by the IEEE 1609 family of higher-layer standards built on top of the low-level IEEE 802.11p standard.

In December 2016, the US Department of Transportation proposed a rule mandating V2V communication on light vehicles by 2019. After Donald Trump was sworn in as US president a month later, however, he signed an executive order requiring that two existing regulations be eliminated before any new regulation could be issued. This order makes it very difficult for the proposed V2V mandate to be issued in the near future, leaving industrywide adoption of V2V technology in the United States uncertain at this time.

Wi-Fi is here to stay

At present Wi-Fi hotspot service in automobiles continues to gain wider acceptance in the North American market, particularly in the United States. Meanwhile, wireless smartphone projection technology is expected to be the next promising Wi-Fi use case in automobiles a few years from now. But for V2V WAVE technology, industrywide adoption is yet to be seen.

Overall, however, it is clear that by adopting Wi-Fi technology in automobiles, vendors are able to offer high-performance applications, such as broadband internet and wireless smartphone projection, to drivers and passengers.

And as today’s automobile continues its transition into becoming a home away from home for both drivers and passengers in the future, Wi-Fi’s position as a key wireless standard in this transformation remains secure.

For more info:

Christian Kim is a Senior Analyst for IoT & Connectivity at the IHS Technology Group within IHS Markit
Posted 4 September 2017

World radiology & cardiology IT market remains highly concentrated

Just five companies in 2016 accounted for more than half of the total revenue of the global radiology and cardiology IT market as it continues to move toward integrated healthcare solutions, according to a new IHS Markit report on the subject. And while the replacement market in mature territories like the United States and Western Europe remains competitive, sociopolitical and economic prospects in various emerging markets are less promising and will inhibit growth.

The report, entitled “Radiology and Cardiology IT – 2017,” offers a close look at the global radiology and cardiology IT market’s various strengths, weaknesses, opportunities, and potential barriers. It evaluates trends sweeping the market, provides revenue forecasts through 2021, and supplies detailed information on the six largest sub-regional territories—the United States, the United Kingdom, Germany, France, China, and Japan—as well as on 49 sub-regional markets.

That just five companies dominate the worldwide radiology and cardiology IT market indicates how highly concentrated it remains. Together, Chicago-headquartered GE Healthcare, Belgian-German conglomerate AGFA Healthcare, Philips of the Netherlands, Nashville-based Change Healthcare (formerly McKesson), and Fujifilm of Japan held 57% of the market’s total revenue of $2.8 billion.

Within the market, the largest segment is radiology picture archiving and communication systems (PACS). An electronic imaging technology that eliminates the need for physical materials when storing or transmitting medical data, radiology PACS accounted for a whopping 78% of total global radiology and cardiology IT revenue.

Among regions, the combined markets of North America and Western Europe represented more than three-fifths of worldwide revenue, but saturation is driving vendors to focus on differentiation in products and services. Meanwhile, a dearth of robust IT infrastructure and limited bandwidth are hampering growth in emerging markets in parts of Latin America, Africa, and Southeast Asia. The Asia-Pacific market, however, is projected to enjoy the fastest growth in both radiology IT and cardiology IT in the years to come.

The developments in the global radiology and cardiology IT space are part of a larger worldwide trend in which healthcare providers are adopting newer, more innovative, and integrated healthcare IT solutions. Providers are hoping to increase efficiency in operations while also attempting to address continually changing reimbursement models.

Moving forward, key market opportunities will lie with managed services to cater to user demand, as well as with scalable, lightweight solutions; lengthier and more comprehensive contracts; and integrated solutions that are interoperable with business analytics platforms. In the short term as providers look to replace siloed solutions, IHS Markit predicts that the interoperability of a solution will have the biggest bearing on procurement decisions.

For more info:

Nile Kazimer is an Analyst, Healthcare Technology, at the IHS Technology Group within IHS Markit
Posted 11 September 2017

Robotics, robots and technology: a simplified overview of a vast subject

In the past few years robotics has gone through astonishingly rapid development thanks to the overlap of various factors, including affordable high-powered computing, compact and mobile components, Big Data machine-learning, and low-cost 3-D manufacturing.

These technologies have led to a new wave of innovative robot designs. And where previously they would have been cumbersome, ineffective or dangerous, robots are increasingly utilized today in consumer and professional applications alike.

The global market for industrial robots

At the fundamental level, the development of current robotics—defined as the underlying technologies associated with robots—has less to do with the physical actuation and operation of devices. Instead, robotics development has more in common with computerized control and the development of machine autonomy. Both of these variables, in turn, are closely associated with increased machine perception via sensors, as well as with logical decision-making based on the recognition of patterns that is a hallmark of machine learning.

The difference, then, between robots and the many physical devices of similar form lies mainly in two aspects. The first is that unlike other comparable devices, robots can provide feedback to their human controller, using sensors to create haptic—i.e., touch—or remote input. The second is that robots can process sensory data in order to decide how to execute tasks and, in more advanced cases, even to define the nature of the task to be performed.

Robotic intelligence

Intelligence in robotics is conceptually complex. Even the task of defining intelligence is a matter of much debate, encompassing other abstract notions such as understanding, learning, reasoning, and meaning. A quick review of most dictionaries easily turns up circular definitions: understanding is defined as perceiving something, while perception is defined as the ability to understand something.

The difficulty in applying these abstract definitions to machine code is that we are fully aware of the processes that produce an outcome which would otherwise look like reasoning or understanding. So it becomes very easy to understate or dismiss machine intelligence as distinct and distant from organic intelligence, simply because we already understand the details of the process: if a human codes for the behavior, the thinking goes, it cannot be truly intelligent.

This perception is being challenged by self-learning structures, such as the neural network used for machine learning. In these cases the intelligence is not programmed but learnt, and the learning is then applied to other similar tasks, further inducing learning. This process of generating intelligence is analogous to the way that organic intelligence develops through trial-and-error and then repetition, ultimately leading to innovative decisions—that is, decisions that are not predisposed.

Machine learning and deep learning

Perhaps the crucial change in recent machine intelligence is the development of machine learning—more specifically deep learning.

Machine learning uses large volumes of data to recognise patterns based on similar experiences. This method often makes use of neural networks, where probabilistic outcomes are combined to then make assertions about what a particular piece of information represents. The power of machine learning is that it can be applied to any source data, which then opens up the possibility for the rapid acceleration of machine intelligence. The current stage for most artificial intelligence is pattern recognition from large volumes of text or visual data, such as the facial-recognition algorithms on Facebook, the contextual recommendations of Google Assistant, or the natural language processing of speech by Amazon’s Alexa.

Recent machine learning systems use a process called deep learning, calling for algorithms to structure high-level abstractions in data via the processing of multiple layers of information. This occurs as machine learning tries to emulate the workings of a human brain.
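
As an illustrative sketch only (hand-picked hypothetical weights, not any production system), the layered structure that deep learning relies on can be reduced to a few lines of code: each layer computes weighted sums of its inputs, and a non-linearity lets the layers above work with progressively higher-level abstractions.

```python
def relu(vector):
    # Non-linearity: negative activations are clipped to zero.
    return [max(0.0, x) for x in vector]

def dense(inputs, weights, biases):
    # One fully connected layer: a weighted sum of inputs plus a bias per neuron.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# A tiny two-layer network with purely illustrative weights.
hidden = relu(dense([1.0, 2.0],
                    weights=[[0.5, -0.2], [0.3, 0.8]],
                    biases=[0.1, -0.1]))
output = dense(hidden, weights=[[1.0, -1.0]], biases=[0.0])
print(output)
```

In a real deep-learning system the weights are not hand-picked but learnt from data, typically via backpropagation across many stacked layers, with the computation running on specialised hardware rather than in pure Python.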

Progress today in deep learning has been possible thanks to advanced algorithms, alongside the development of new and much faster hardware systems based on multiple graphics processing unit (GPU) cores instead of traditional central processing units (CPU). These new architectures allow faster learning phases as well as more accurate results.

Artificial intelligence

Artificial intelligence is a broad and poorly defined ideal—the imaginary and moving boundary at which the stages of machine intelligence overreach our current expectations of their capabilities.

At the edge of currently accepted AI definitions are recent achievements in the field, such as the understanding of human speech, the ability to win at highly complex games like the ancient Asian game Go, the ability to recognise faces and objects, and the ability to analyse and spot patterns in huge volumes of data.

Yet each of these is premised on a method that is still relatively explainable. These achievements, while remarkable, also lack a higher order of intelligence demonstrable in abstract, nonmaterial qualities like creativity, understanding, or self-awareness. In all likelihood, it is only a matter of time before these triumphs are considered mere computer know-how and not really AI.

At that point, our expectations will then move even closer toward abstraction, which we currently find harder to define.

Tom Morrod is Research and Analysis Executive Director for the IHS Technology Group within IHS Markit
Posted 18 September 2017

The strategic role of memory in the iPhone

With the official release of the iPhone 8 and 8 Plus on Friday (September 22), all eyes are once again on Apple to see how well the new smartphone fares in the market among consumers and the retail channel alike.

But while it is common knowledge that Apple rakes in substantial revenue for the crown jewel of its portfolio, less well-known is the role played by NAND flash memory as a profit engine and key contributor to Apple’s massive coffers, especially during the early iPhone years.

That starring role for memory may no longer be true today, as Apple has shifted its revenue model from a focus on hardware to one heavily reliant on services. Even so, the outsize impact of memory can be understood by analyzing the decade-long historical record behind Apple’s total bill-of-materials (BOM) cost for the iPhone.

At the IHS Technology Group within IHS Markit, the Teardowns & Cost Benchmarking Services research has built a deep and extensive database containing information on every single component that goes into the iPhone—and what it costs to make each component—to come up with a total BOM cost for every iteration of the phone.

That database goes all the way back to 2007 upon the release of the first iPhone and has continued to this day, with Teardowns announcing its most recent findings on the iPhone 8 in an official IHS Markit | Technology news release.

Memory’s place in the iPhone BOM

A study of the historical information in IHS iPhone teardowns reveals many intriguing details on memory as it relates to the iPhone BOM. Overall, the BOM cost is a major component in determining the cost of goods sold (COGS), itself an important measure in calculating the average selling price (ASP).

The IHS Markit graphic below shows the share occupied by memory in the iPhone’s total BOM cost throughout the 10 years that the device has been on the market.

iPhone Evolution of Design and Cost

The original iPhone in 2007 contained just 4 gigabytes (GB) of memory, which was the second-most expensive component in the iPhone BOM after the phone’s display. At $48 for those 4GB, the memory cost-per-gigabyte (cost/GB) figure came out to a sizable $12 per gigabyte on average.

By the following year, market forces had caused the cost of memory to drop significantly. Historical IHS Markit Teardown data shows that the BOM for memory in the iPhone 3G in 2008 amounted to just $16—a huge $32 drop from the previous year.

At the same time, Apple boosted the base memory capacity in the 3G to 8GB. Together those two factors—a plunge in the price of memory, along with a doubling of the iPhone’s memory—paved the way for a spectacular reduction in memory cost/GB.

From $12 in 2007, memory cost/GB was now just $2. By successfully curtailing its outlay on memory, Apple was also able to slash the iPhone BOM from more than $230 to approximately $177.
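
The arithmetic behind those cost-per-gigabyte figures is simple enough to verify directly, using only the teardown numbers cited above:

```python
def cost_per_gb(memory_bom_usd, capacity_gb):
    # Memory BOM cost divided by capacity gives the cost per gigabyte.
    return memory_bom_usd / capacity_gb

iphone_2007 = cost_per_gb(48, 4)   # original iPhone: $48 of memory, 4GB
iphone_2008 = cost_per_gb(16, 8)   # iPhone 3G: $16 of memory, 8GB

print(iphone_2007, iphone_2008)    # 12.0 2.0 ($/GB)
```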

Memory as a profit engine

In the early days of the iPhone, memory occupied a seat front and center in Apple’s profit-generating strategy. This was no secret, as Apple capitalized on the decline in industry memory prices to cut its own memory BOM costs—all the while continuing to charge consumers a hefty premium via the iPhone’s lofty retail pricing.

Between 2009 and 2015, from the 3Gs to the 6s, the iPhone featured only incremental upgrades in storage capacity, as shown in the IHS Markit graphic below. Yet consumers could expect to pay up to an additional $100 in price premium to avail of the extra memory.

In fact, however, memory had a variable cost of less than 20% of the retail price premium, enabling Apple to reap handsome profits from the lopsided equation.
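
To illustrate that lopsided equation, take the figures above at face value: a $100 retail premium against a memory variable cost of at most 20% of that premium. The resulting margin (a rough sketch, not Apple’s actual accounting) is:

```python
# Rough margin on a memory upgrade, per the figures cited above.
premium_usd = 100          # retail price premium for the higher-capacity tier
memory_cost_ratio = 0.20   # memory variable cost: under 20% of the premium
memory_cost_usd = premium_usd * memory_cost_ratio

gross_margin_usd = premium_usd - memory_cost_usd
print(gross_margin_usd)    # at least $80 of margin per upgrade
```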

Apple's Profit Engine

Evolution to a new model

With hardware no longer a revenue driver, memory plays a different role today in the Apple profit playbook, especially as Apple has shifted focus to services as its new profit engine. 

Last year with the iPhone 7, the base memory configuration for the phone rose to 32GB, up from the base 16GB level that had remained in place for seven years from the iPhone 3Gs to the iPhone 6s. The larger base memory is no accident, instead representing for Apple a carefully calibrated mechanism that allows consumers to make optimal use of Apple’s own incredibly rich ecosystem.

By doubling the iPhone’s memory density, Apple is giving consumers a clear signal to utilize with confidence the vast storehouse of apps, music, books, and other media residing in the Apple App Store and iTunes. And should their phones run out of storage capacity, consumers are reminded that they can take advantage of fee-based Apple auxiliary services like the iCloud.

Offering users today a memory-boosted iPhone at the forefront of consumption, Apple is able to happily monetize each ensuing consumer transaction to its own economic benefit.  

For more information:

IHS Technology Group at IHS Markit
Posted 26 September 2017


First Apple Watch with cellular will benefit from a strong mobile signal

The release of the new Apple Watch Series 3 is making noise in the marketplace because it’s the first Apple Watch with built-in cellular connectivity. Apple promises Series 3 users the ability to “stay connected when you’re away from your phone.”

In practical terms, this means that Series 3 users can make calls, send texts, stream music, and more, entirely with their watch. Previous versions of the Apple Watch required an iPhone in close proximity for all wireless connectivity. The LTE version of the Apple Watch Series 3, however, allows users to switch to mobile carrier networks for wireless connectivity for usage when it’s not convenient to carry an iPhone.

A cellular smartwatch has a much smaller single antenna than that of a smartphone, and as a result will be less sensitive to weaker signals than larger mobile devices with antenna diversity. Apple has placed the antenna underneath the watch display, demonstrating tremendous innovation in keeping the slim form factor of Apple Watch. Existing cellular smartwatches such as Samsung Gear and LG devices rely on added volume to accommodate LTE antennas or embed them into the bands of the watch. Additionally, the Apple Watch Series 3 Cellular supports a smaller selection of mobile network bands, or frequencies, than modern smartphones.

The impact of these differences is that not all networks are created equal, and not all Series 3 Cellular users will experience the same level of connectivity. At the cell edge especially, the Apple Watch could encounter difficulty holding on to weak LTE signals, delivering sub-par mobile performance relative to that of the companion iPhone.

Connectivity will differ, depending on carrier and location

While the Apple Watch Series 3 promises many benefits (particularly for those who want to stay connected while exercising), the ability to connect and the quality of connectivity will vary, depending on the mobile network associated with the watch, as well as a carrier’s performance in a particular location. In order to utilize the connectivity features of the Series 3, users must pay an additional fee to their mobile network, and the carrier must be the same as that of the user’s iPhone.

In situations where a user’s iPhone is not in close proximity to the watch, not all US carriers will deliver the same level of connectivity. With AT&T or T-Mobile, the Apple Watch Series 3 uses either LTE or 3G for connectivity. However, the Series 3 does not support the 3G technology of either Sprint or Verizon. Instead, the watch will only connect to LTE to make Voice over LTE (VoLTE) calls or send texts with Sprint or Verizon.

Within major metropolitan areas, this difference in connectivity among carriers might not make a noticeable difference. RootMetrics, an IHS Markit company, has measured mobile network performance under real-world conditions for many years and has collected hundreds of millions of test samples across the 125 most populated metropolitan markets in the US, within each of the 50 states, and more. In the first half of 2017, RootMetrics assessed the network technology capabilities of the four major US carriers during testing in metropolitan markets and across the 50 states. RootMetrics testing shows that all four carriers offer a strong LTE footprint within the 125 largest US metro areas.

Outside of metro areas, however, users could see different performance, with coverage gaps particularly important to users in rural areas. This is where provisioning for the Series 3 to access both 3G and LTE networks becomes more noticeable. AT&T users will be able to take advantage of both 3G and LTE networks with the Series 3 watch, and AT&T’s coverage in rural areas outside of metropolitan markets is over 90% on 3G and LTE combined. Verizon users won’t be able to access 3G, but Verizon’s LTE footprint is robust and covers over 90% of the rural and non-metro areas RootMetrics has tested.

The story shifts, however, when considering the potential impact on Sprint and T-Mobile. The LTE footprints of Sprint and T-Mobile aren’t as large as those of AT&T or Verizon outside of metropolitan markets. RootMetrics testing suggests that across the more rural areas outside of metros, both T-Mobile’s and Sprint’s LTE footprints are at around 67%. Since 3G support is not available for the Series 3 on Sprint’s network, this means there could be locations outside of metro areas where coverage becomes problematic for owners of the Series 3. T-Mobile users, on the other hand, will be able to take advantage of 3G service when LTE is not available. For T-Mobile users, LTE coverage plus 3G coverage (much of which is provided by an agreement with AT&T in rural areas) provides a footprint close to 82%.

The above scenarios will not affect the vast majority of Series 3 users, who are likely to either have their iPhone nearby and/or will be using their watch within metropolitan areas. And, for both Sprint and T-Mobile, any performance hiccups due to coverage should ease as their LTE footprints continue to expand into rural areas.

Keep in mind that RootMetrics testing figures are averages based on millions of test samples collected across the 125 most populous US metropolitan markets and throughout each of the 50 states. That said, those figures offer directional guidance for each network’s LTE capabilities. Real-world Apple Watch Series 3 cellular results will vary, depending on how good—or poor—each network’s service is in any given location, as well as on the real-world performance of Apple’s innovative embedded display antenna design.

With the growing number of devices connecting to mobile networks as more and more objects become smart, connected, and part of the Internet of Things, the quality of mobile network coverage becomes even more important. The proliferation of connected devices, including IoT devices like smartwatches, smart meters, eReaders, and more, requires strong network connectivity in order to ensure a good consumer experience.

To learn how the carriers performed in specific US metro areas, within each of the 50 states, or across the US as a whole, view the RootMetrics series of RootScore Reports, which characterize network performance under real-world mobile usage conditions.

IHS Technology at IHS Markit
Posted 27 September 2017

Achieving video business success for telcos

An event the scale of Amsterdam’s annual International Broadcasting Convention (IBC) tradeshow—this year attended by a record 57,669 industry professionals—tends to leave no stone unturned in its demonstration and examination of the latest technology shaping the TV and video industry. Conversations with vendors about their latest wares and the direction of the industry are at the core of the event, but less common are discussions with executives on the operator side of the business about the issues they are facing. IHS Markit was, therefore, pleased and proud to be the sole analyst firm present at a special behind-closed-doors roundtable of pay-TV video executives organised by China’s Huawei Technologies.

On hand to discuss the topic of video business success for telcos were representatives from a diverse mix of major pay-TV operators from some of the largest markets in Europe, Latin America, and Asia. Given the privilege of setting the scene for an extended discussion, IHS Markit presented its analysis on the evolution of the telco video business. As part of our presentation, we identified the 50 leading telco video groups by subscription and the performance benchmarks they have set.

IHS Markit top 50 telco video providers

IHS Markit top 10 telco video providers

Furthermore, we categorised, examined, and detailed the winning strategies that have characterised the success of these telcos.

IHS Markit routes to success for telco video operators

The analysis presented at IBC, and briefly outlined above, represents a preview of forthcoming deep-dive research to be published in a white paper, Video as a Core Service for Telcos: Analysis of 50 Leading Operators in Achieving Video Business Success, in November 2017.

Opportunities and challenges for telco video operators

The insights shared by operators in the roundtable discussion shed light on various aspects of the telco video business, including cause for optimism as well as the challenges faced in the changing landscape.

One consensus—a positive—was that telcos, as providers of access and not simply content alone, are well-placed to retain their position in the multiplay market. As long as consumers want digital content—whether video, music, or online games—they will need to pay for access to networks in order to get it. And, echoing the view of IHS Markit, the operators in attendance generally saw video as a core service that represents the most marketable hook for attracting customers to their broadband and mobile offerings. Indeed, it was video’s indirect contribution to the broader telco business, in terms of its role as a bundle draw, that was highlighted by some as being more important than the revenue generated by video in isolation.

Some operators are being more aggressive than others in making video a core service for their customers, pushing alternatives to traditional pay TV in the form of flexible online video offerings bundled with mobile and/or broadband. Many are even completely unbundling access to video via standalone online services, to ensure that they have as many customer relationships as possible for ongoing cross-selling and upselling efforts—again, underlining video’s importance as an indirect revenue generator.

Assessing shifts in the content rights landscape, the operators did not feel that fragmentation would threaten their strong position as video aggregators. With the likes of Disney unbundling their channels via dedicated direct-to-consumer (D2C) apps, operators will still be needed as distribution partners for these offerings, it was argued. Such a scenario points to the rise of a new kind of carriage deal, in which operators strike agreements to carry on-demand video apps instead of linear channels, much like they have with Netflix, itself a channel-like online video service.

However, in spite of content owners’ ongoing need for telco distribution partners, the D2C trend is still, in the IHS Markit view, a somewhat worrying development for those operators when it comes to their pay-TV ambitions. A wider, a-la-carte distribution of key subscription video content on alternative platforms (e.g., those of Amazon, Google, Roku, and others) has the potential to marginalise traditional pay TV and undermine its appeal.

As to challenges, one area in which operators agreed they needed to improve is customer data—in better using what they have, as well as in gathering more of it. One executive admitted that after prioritising things like product strategy or customer acquisition and retention, collecting and analysing data was the last thing his company thought about—yet was what they needed to give more attention to, in order to unlock its value.

One problem, though, is the lack of access to data for operators when they work with partners such as Netflix and YouTube. This is because their visibility ends once the customer enters the app, at which point operators do not know how such customers are using the services. This highlights one of the challenges of providing access to third-party apps, compared to linear channels and in-house on-demand services.

While the operators at the roundtable expressed a willingness to work with online content services that either behave like channels (Netflix) or aggregate semi-professional multi-channel network (MCN) content that is not widely carried by traditional TV (YouTube), they were more wary of online aggregators moving into professional programming—specifically, Facebook.

The social network’s new Watch service, which has ambitions of hosting long-form, 30-minute TV shows, was identified as posing a threat to pay TV, by providing an alternative platform for operators’ traditional content partners. With Facebook, Twitter, Snap, and others ramping up their video growth strategies, they will become increasingly bandwidth-hungry.

This might be good news for operators from a broadband and mobile perspective, but is less positive for their video business.

Ted Hall is Research & Analysis Associate Director, Television, within the IHS Technology Group at IHS Markit
Posted 3 October 2017

Interventional X-ray systems to see growth in demand worldwide

The increased prevalence of ailments such as cardiovascular disease and stroke is fuelling demand worldwide for interventional X-ray systems, which support a medical specialty providing image-guided, minimally invasive diagnosis and treatment in order to lessen patient risk.

Driven by an ageing population and behavioural risk factors, cardiovascular disease is the leading cause of death, according to the American Heart Association, accounting for more than 17.3 million deaths in 2013. That number is expected to rise to more than 23.6 million by 2030.

The second leading global cause of death behind heart disease in 2013 was stroke, also a cardiovascular disorder, accounting for 11.8 percent of all deaths worldwide.

To this end, demand will grow for interventional X-ray systems, IHS Markit believes, especially as the systems are harnessed in support of various interventional cardiology procedures, such as transcatheter aortic valve implantation (TAVI), also known as transcatheter aortic valve replacement (TAVR); percutaneous coronary intervention (PCI); and abdominal aortic aneurysm (AAA) repair.

Overall, interventional cardiology procedures are being performed more often today because of increased reimbursement by health providers, along with a greater awareness among both medical practitioners and patients of the clinical benefits to be derived. Interventional procedures are also now easier for physicians to perform, helping to reduce patient risk.

IHS Markit chart: interventional X-ray manufacturers continue to tailor interventional X-ray systems

In the case of strokes, with the number of cases also growing worldwide, the choice of mechanical thrombectomy as a less aggressive mode of treatment will benefit the interventional neurology market. Here demand is projected to increase over the next five years, with angiography systems preferred for visualizing and guiding thrombectomy. These systems need to provide not only uncompromising image quality with minimal interference in the interventional procedure, but also precise and flexible positioning control.

In the United Kingdom, more stroke patients will be able to access mechanical thrombectomy as plans call for the treatment to be rolled out to 8,000 stroke patients a year—facilitated by the huge expansion in the number of hospitals offering this procedure, compared to the few that offer it today. As a result, thousands of stroke patients will be saved from lifelong disability.

With the rise in demand for mechanical thrombectomy comes a need to further drive innovation in comprehensive imaging capabilities. And as developments in the field continue, even more complex procedures can be performed. In turn, vendors can refine their devices accordingly, enabling greater ease and safety for better patient outcomes.

New interventional procedures also have a role

Several novel interventional procedures can also be performed as an alternative to drug treatments historically administered as part of the patient care pathway.

For instance, atrial fibrillation—one of the most important risk factors for stroke, owing to blood clots that form in the left atrial appendage—affects 33.5 million people globally. Untreated, it can account for 15% of all strokes. While the conventional treatment and prescription for this condition is blood thinners, not all patients can be treated successfully through this method. But because of improved interventional cardiology, it is now possible to perform a minimally invasive interventional procedure known as left atrial appendage closure (LAAC), which seals off a small sac in the heart where blood clots have a tendency to form. Clinical demand for LAAC procedures is likely to take root first in North America, thanks to ongoing innovations in the imaging capabilities of interventional cardiology X-ray systems.

Overall, new technological developments in interventional X-ray systems are allowing more complex procedures to be performed. Interventional suites should, therefore, include tailored features that match the therapeutic requirements of the interventional imaging technique and the skills of the interventional team.

And as patient cases grow in complexity and become more challenging, the need will arise for enhanced visualisation and image quality from interventional X-ray equipment vendors, ensuring that cases are handled safely and efficiently.

For more information on this subject, see our Interventional X-ray equipment 2017 report as part of the X-ray Intelligence service.

Bhvita Jani is an Analyst, Healthcare Technology, within the IHS Technology Group at IHS Markit
Posted 9 October 2017

Grid defection is attractive consumer alternative to reduce energy costs

On the back of rapidly decreasing costs for energy storage and solar photovoltaics (PV), consumers wishing to achieve a low-cost and reliable supply of power are considering grid defection—or at least, partial grid-defection—as an increasingly attractive alternative. But while lower energy costs are certainly welcome news for end customers, the same cannot be said for incumbent utilities, many of which will be facing new challenges that call into question their long-standing business model of selling and distributing electricity.

To be sure, grid defection is a term used largely in North America. Yet it is in Europe where the residential solar and energy storage markets have enjoyed greater penetration over the past few years. For instance, Germany alone already has more than 60,000 residential battery storage systems installed today, compared to less than 10,000 grid-connected residential energy storage systems in the United States.

This article, then, will focus specifically on the economics of grid defection in Europe today and in the future, and will also examine how such a development could impact the energy industry.

Grid defection: a competitive tool for customers

In Europe the economics for going off-grid are improving, but practical considerations make grid defection unlikely to be a major disruptor to existing traditional and entrenched systems of consuming power. Even so, partial grid defection—especially a scenario in which PV is coupled with a battery energy storage system (BESS)—is seen by customers as an increasingly attractive option to hedge against future rises in retail power prices.

As part of the latest report published by the IHS Markit Energy Storage Intelligence Service, we carried out in-depth modelling using hourly demand profiles to compare the levelized cost of energy (LCOE) under four scenarios: future retail rates without any onsite generation; a solar PV system alone; a combination of solar PV and battery energy storage to maximise self-consumption; and a full grid-defection scenario equivalent to 100% self-consumption. The term LCOE, which compares the relative cost of energy produced by different energy-generating sources, is often touted as the new metric in evaluating cost performance of PV systems.
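
The report’s model itself is proprietary, but the standard LCOE formula it builds on simply divides discounted lifetime cost by discounted lifetime energy output. A minimal sketch, with purely illustrative inputs rather than the report’s own assumptions:

```python
def lcoe(capex, annual_opex, annual_energy_kwh, discount_rate, years):
    # Levelized cost of energy: discounted lifetime cost / discounted lifetime output.
    cost = capex + sum(annual_opex / (1 + discount_rate) ** t
                       for t in range(1, years + 1))
    energy = sum(annual_energy_kwh / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return cost / energy

# Hypothetical residential PV example: a 4 kW array at 1,200 EUR/kW,
# 1% of capex per year in opex, ~950 kWh/kW/year yield, 4% discount, 25 years.
capex = 4 * 1200
print(round(lcoe(capex, 0.01 * capex, 4 * 950, 0.04, 25), 3))  # roughly 0.093 EUR/kWh
```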

The modelling showed that:

  • Partial grid defection using solar PV alone is already competitive with retail rates in the four countries analysed—namely, Germany, Italy, the Netherlands, and the United Kingdom.
  • When analysing the LCOE offered by a combined PV and battery storage solution, IHS Markit shows that this case has already reached parity with retail rates in Germany and Italy, and will do so in the United Kingdom by 2023.
  • Complete disconnection from the grid will not become economically attractive until at least 2025, IHS Markit expects. 
  • For battery storage, the so-called sizing sweet spot that is also the most economically attractive lies between 5 and 8 kilowatt-hours in energy capacity for the BESS.

Regardless of the economics involved, we do not expect a scenario in which customers fully disconnect from the grid or achieve 100% self-sufficiency to become reality in Europe. This is due both to the technical impracticalities stopping such a scenario from taking root and to the fact that a widely stable electricity grid is already in place, making complete grid defection unnecessary.

Nonetheless, IHS Markit expects that more than a million residential battery energy storage systems will be installed in Europe by 2025, driven primarily by customers looking to take control of their energy supply and to hedge against future price uncertainty.

New business models offer virtual solution toward 100% independence

As customers look to minimize reliance on the grid as much as possible, they need either active demand-side management or, alternatively, to consider new business models. A number of players, including the likes of Sonnen, E3DC, and Beegy, have started to act as electricity suppliers, offering customers the capability to achieve so-called virtual independence through the mechanisms described below.

  • In the most common scenario, customers will feed excess solar power into the grid, but the solution provider will meter the electricity and add it to the customer’s account. During times when the customer is unable to generate enough solar power, they receive this electricity back from the supplier.
  • Alternatively, some suppliers are offering flat-rate electricity tariffs. Here the customer—with solar and storage installed—pays a fixed fee, which covers all electricity demand that is not self-produced (regardless of consumption levels).
  • Other players offer Virtual Power Plant solutions, which provide additional revenue to customers by aggregating distributed storage systems to participate in ancillary services markets.

The challenges of decentralised generation and self-consumption

Irrespective of the extent to which customers defect from the grid, or whether they choose new virtual self-consumption models, the shift toward the so-called energy prosumer fundamentally challenges the incumbents—electricity suppliers and network operators alike. In fact, the share of self-consumed power as part of total electricity demand is expected to double by 2020.

Still, IHS Markit predicts that utilities will play a major role in the growing energy storage market. Seven of the 10 biggest electricity suppliers in Europe, including E.ON and EDF, are already commercialising propositions related to behind-the-meter energy storage.

Personally I believe the trend toward grid defection in order to increase energy independence won’t stop. We see strong economic and emotional drivers supporting this shift. At the same time, utilities are clearly starting to understand that the energy landscape is shifting around them, and that they will be able to play a crucial role—not only in commercialising new business models, but also in binding customers through energy storage.

For more information on this and related topics, visit our Energy Storage Intelligence Service.

Julian Jansen is Senior Analyst, Solar, at the IHS Technology Group within IHS Markit
Posted 24 July 2017

Sprint and T-Mobile combined: the implications of a merger


The rumored and much-discussed merger between Sprint and T-Mobile would have myriad ramifications for both consumers and mobile carriers alike, creating a company with nearly 130 million subscriptions—a similar scale to that of market leaders AT&T and Verizon. Combined, Sprint and T-Mobile would have the scale and resources to provide stronger network competition to the more extensive networks of AT&T and Verizon. Consumers in both rural and urban areas should experience improved coverage and better mobile performance when the deal is finalized and the network operations are integrated. However, the deal is not as straightforward as it may sound.

While both carriers operate 4G LTE networks, Sprint and T-Mobile run different 3G network technologies and own very different spectrum holdings. Combining the network assets and spectrum allotments of the two will likely prove a complex and time-consuming undertaking. That could undercut the momentum T-Mobile has built since the failure of its merger with AT&T in 2011, which handed T-Mobile a breakup package worth $4 billion, including $3 billion in cash, a spectrum access deal worth an additional $1 billion, and a seven-year roaming agreement. Since then, T-Mobile has more than doubled its subscriber base in a market that has grown by just 25%. Sprint, acquired by Japanese operator Softbank in 2013 with the intent of merging with T-Mobile, has seen subscription growth stagnate despite often drastic (and expensive) marketing moves, alongside significant revenue declines and losses in profitability.

Sprint and T-Mobile users in rural areas may benefit the most from the combined network through improved coverage and the resources to invest in the increasingly dense networks that will support 5G. Read on to see what RootMetrics, an IHS Markit company, believes this merger would mean for consumers.

Device support

With Sprint using CDMA for their 3G network and T-Mobile using the GSM/UMTS standard, Sprint and T-Mobile currently sell and support specific devices that work on their own networks, including iPhones in which chipset providers differ (Intel vs. Qualcomm, for example) depending on the network. Current Sprint devices will not have the same experience as a current T-Mobile device on the combined network, and vice versa. That may lead to increased churn as users find that service degrades on the less favored network (likely to be Sprint’s CDMA) as the networks are integrated.

In order for users to take full advantage of the merged network’s services, once the merger is complete and a network integration plan is clear, OEMs such as Samsung, LG, Apple, and others must begin producing devices that support the features and frequencies of the combined Sprint and T-Mobile network. It’s important to note that many current Sprint and T-Mobile devices will still be able to take advantage of certain features of the combined network, and users will not necessarily need a new device.

Coverage

Both carriers have strong coverage in urban areas and metropolitan markets across the US. Indeed, neither Sprint nor T-Mobile exhibited any significant domestic roaming during RootMetrics testing across the 125 most populated metro areas in the US in the first half of 2017.

The current Sprint and T-Mobile networks both utilize low-band spectrum, which provide better signal penetration for challenging in-building locations, as well as strong coverage over large distances. T-Mobile utilizes 700 MHz spectrum and is currently deploying its 600 MHz spectrum widely, while Sprint uses its 850 MHz band spectrum.

In terms of providing coverage outside of metropolitan markets, neither carrier's coverage encompasses as wide an area as that of AT&T or Verizon. RootMetrics testing across each of the 50 states (outside of metro areas) observed that Sprint and T-Mobile often roamed on their competitors' networks domestically. During RootMetrics state testing, Sprint roamed on competitor networks approximately 27% of the time, while T-Mobile's network roamed at a rate of 26%. The RootMetrics tests were performed with the most current consumer devices and represented the real-world consumer experience of using data, call, and text services.

Once the merger takes place, RootMetrics expects that the new combined network will directly compete with AT&T and Verizon for coverage in rural areas, rather than roaming on competitors’ networks. However, in order for the combined network to compete effectively with AT&T and Verizon, many new towers will need to be deployed utilizing low-band frequencies (600/700/850). See the maps below for a look at how prevalent domestic roaming was for Sprint and T-Mobile during RootMetrics testing of the 50 states in the first half of 2017, as well as for a look at how roaming is affected when the networks of Sprint and T-Mobile are combined.

Sprint roaming map (RootMetrics State testing, H1 2017)

T-Mobile roaming map (RootMetrics State testing, H1 2017)

 

Sprint and T-Mobile combined roaming map (RootMetrics State testing, H1 2017)

    

As these roaming maps suggest, the combined Sprint and T-Mobile network would eliminate a large portion of domestic roaming, bringing it down to roughly 16%. In some areas, however, new towers will be required to provide coverage. It is also worth noting that T-Mobile is currently deploying its 600 MHz band spectrum widely across the US. This deployment suggests coverage would increase from 315 million to 321 million people on a population basis, and the difference in coverage on an area basis would be even more significant.
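
Why does combining the networks cut roaming only to about 16% rather than much further? Because the combined network needs to roam only where neither carrier has native coverage, the result depends on how strongly the two carriers' coverage gaps overlap geographically. The figures below are illustrative arithmetic based on the percentages quoted above, with a hypothetical sample size and an assumed overlap; they are not RootMetrics data.

```python
# Illustrative only: hypothetical location counts, not RootMetrics data.
total = 100                  # hypothetical number of test locations
sprint_gap = 27              # locations where Sprint roamed (27% quoted above)
tmobile_gap = 26             # locations where T-Mobile roamed (26% quoted above)
shared_gap = 16              # locations where *both* roamed (assumed overlap)

combined_gap = shared_gap    # the combined network roams only in shared gaps
print(combined_gap / total)  # roughly the 16% figure above

# If the coverage gaps were geographically independent, the expected overlap
# would be far smaller, which suggests the two carriers tend to lack coverage
# in the same rural areas:
independent_estimate = (sprint_gap / total) * (tmobile_gap / total)
print(independent_estimate)  # approximately 0.07
```

The gap between the independent estimate (~7%) and the observed ~16% is what the maps show visually: both carriers' uncovered areas cluster in the same rural regions.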

Spectrum assets

Sprint is well known for its rich spectrum assets, particularly the 2.5 GHz TDD spectrum. Sprint is effectively using that spectrum for 3-carrier aggregation (3CA) on its LTE network, which allows high peak downlink speeds to be achieved. Sprint has also been deploying its 2.5 GHz TDD spectrum for wireless backhaul purposes, which should give Sprint a distinct advantage over other carriers by eliminating reliance on wired networks and reducing backhaul costs. This spectrum, which would have been considered high band in previous-generation deployments, is also well suited to future 5G deployments, which are set to be increasingly dense and used with higher spectrum bands. Furthermore, Sprint owns frequencies in the 850 MHz and 1900 MHz FDD bands, which help with both in-building and rural coverage.

T-Mobile, meanwhile, currently has spectrum holdings in the 600 MHz, 700 MHz, 1900 MHz, and AWS bands. And as noted above, T-Mobile is currently in the initial stages of deploying its 600 MHz spectrum across the US.

In short, the combined networks’ spectrum assets will be both deep (large bandwidth) and varied (low and high bands) and should be able to effectively compete with AT&T’s and Verizon’s spectrum assets for use with 4G LTE, 5G, and IoT.

Network integration

Integrating networks has not always gone smoothly. For perspective, consider the merger of Cingular Wireless and AT&T Wireless Services (AWS) in late 2004. That merger was much simpler than what we expect to see between Sprint and T-Mobile: Cingular and AT&T had similar, GSM-based network infrastructures, yet integrating the two networks still took several months to complete.

Sprint and T-Mobile each have much more complicated networks today compared with the seemingly “simple” Cingular and AT&T Wireless networks of 2004. As one example of the differences between Sprint and T-Mobile, consider that T-Mobile supports VoLTE while Sprint carries its voice traffic on 1xRTT. Moreover, T-Mobile is GSM-based and Sprint is CDMA-based, and the two carriers do not always share the same infrastructure vendors in the same regions of the country.

In urban and suburban areas, the two carriers' coverage overlaps significantly. Integrating that coverage, including tower-by-tower decisions on which towers to keep and which to forfeit, will be a lengthy process that will likely span several months. Furthermore, the two companies have different approaches to in-building coverage, small cells, and IoT that must be aligned.

On the other hand, earlier mergers have provided valuable lessons that Sprint and T-Mobile might draw upon. Learning from the example of Canadian and Korean operators as they prepared their networks for 4G, Sprint and T-Mobile might decide to shut down EV-DO, leave voice on 1xRTT for a while if necessary, implement HSPA in the EV-DO band, and then bridge it to LTE. This would simplify the integration process.

There is also the possibility that both networks are simply maintained separately. Both T-Mobile and Softbank have experience running disparate networks and could choose that same path after the merger. For instance, in Japan Softbank runs three brands with three different spectrum assets: Softbank Mobile uses W-CDMA and FD-LTE, WCP runs on TD-LTE, and Y!Mobile takes advantage of both TD-LTE and FD-LTE.

Expected outcome

RootMetrics expects that the new company and combined network would be able to compete more effectively with Verizon and AT&T on several fronts, including delivering better coverage in rural areas than either Sprint or T-Mobile alone, improved performance in both urban and rural environments, strong nationwide IoT coverage, and a more seamless deployment of 5G technology.

IHS Technology at IHS Markit
Posted 12 October 2017

 

Changing healthcare patterns to impact medical imaging equipment market


A new IHS Markit report shows that the global landscape for medical imaging equipment like X-ray and ultrasound will be affected during the next few years by various factors, including aging populations and projected changes in the healthcare spending of important territories.

Worldwide revenue for the medical imaging equipment market is forecast to reach $24.0 billion in 2020, up 11.7% from $21.2 billion in 2016, as shown in the graphic below. X-ray and ultrasound equipment make up the largest segments of the medical imaging equipment space, which also includes magnetic resonance imaging (MRI) and computed tomography (CT) systems.

Forecasts and extended analysis of both overall market and individual imaging segments can be found in the report “Medical Imaging Executive Overview,” which collates information from the IHS Markit medical imaging report library to present a summary view of the complete medical imaging market.

Consequences of an aging population

Among the most salient factors expected to impact the medical imaging equipment market is the aging of populations in all of the world's regions. Those aged over 60 will account for 22% of the global population by 2050, compared to 12% in 2015, according to the World Health Organization. Aging populations will lead to a rise in the rate of many chronic diseases, straining healthcare systems in the process and requiring medical imaging equipment to provide high-quality care.

In many countries, however, funds intended for medical imaging equipment are being diverted toward telehealth systems and related services, negatively affecting the imaging market. Also impacting medical imaging budgets is consolidation in the industry, which is taking place in response to continued pressure to eliminate overhead and duplication of services.

A new focus for healthcare providers is procuring imaging products that deliver a higher return on investment (ROI), as new technological advances in the field promise simplified workflows in tandem with more advanced software and faster image capture. For example, handheld ultrasound systems could offer better ROI than stationary ultrasound systems, especially as they give providers the mobility to conduct patient examinations in multiple hospital locations.

Healthcare spending worldwide is also changing

An important factor that will shape the medical imaging equipment market will be the change in healthcare expenditure patterns sweeping across the world.

In the United States, which accounted for 25% of total market revenue in 2016, continuing intense debate on the country’s healthcare system is sure to affect the industry’s global prospects down the road. Within the US, imaging equipment will see lower utilization, longer replacement rates, and less equipment being purchased overall.

The depressed outlook on the US market will be alleviated in part by the expected growth in healthcare spending in developing countries, which will lead to increased uptake of basic medical imaging equipment. Last year the Asia-Pacific region represented 40% of global revenue and 43% of global shipments—a strong showing expected to continue in the next few years. In particular, China—the world’s second-largest medical imaging equipment market after the US—will continue to drive investments in the space until at least 2020, IHS Markit projections show.

Call to action

For manufacturers to stay competitive, IHS Markit recommends a course of action that includes adding more so-called work-horse systems to product portfolios, along with establishing an increased presence in growth markets.

With their capability for multi-purpose use in multiple environments, work-horse products and solutions can boost the operational efficiency of medical imaging equipment while also maximizing ROI. Meanwhile, manufacturers can also stand to benefit from a stronger sales presence in promising growth markets such as Asia-Pacific, Latin America and the Middle East.


Holley Lewis is an Analyst, Healthcare Technology, at the IHS Technology Group within IHS Markit
Posted 23 October 2017

Mipcom TV market invokes power of scripted TV series


Mipcom, which ran from 16-19 October in Cannes, France, is the major gathering place for buyers and sellers of TV programming. This year’s market and conference highlighted TV drama programming, which is rising high on a wave of investment from online platforms and premium pay-TV channels and others. The growing influence of online platforms was also apparent, with Facebook presenting its video strategy and Snap announcing an agreement with NBC Universal.

MIPCOM

On the drama front, there were premiere screenings of new dramas from Telefonica, Sky, and Germany’s Beta Film. While Sony Pictures showcased a new 10-part drama called “Counterpart,” and all the US studios were present (Disney no longer goes to the other Mip event in April), the US was somewhat overshadowed by its European counterparts.

Mipcom’s organisers Reed Midem announced a new event called Canneseries, which will take place in Cannes from 4 April next year and will last a week. With a budget of €4 million, the event will include screenings open to the public, and an official competition presided over by an international jury will present 10 original series. An awards ceremony will close the event on 11 April, to be broadcast live on Canal Plus.

At this year’s conference, HBO chief executive Richard Plepler spoke about the US premium channel’s decision two years ago to launch its direct-to-consumer service, HBO Now. This year, claimed Plepler, has so far seen the biggest subscriber growth in HBO’s history. International rollout is taking shape in three ways: the launch of linear channels, output ‘home of HBO’ deals in 17 countries, and HBO Now. Plepler announced plans to launch a single global direct-to-consumer platform in the first half of 2018, adding that HBO plans to increase investment in original content outside the US and to offer a selection of these to US viewers on an on-demand basis.  

Other signs that the axis of the global TV business has shifted slightly could be picked up in conference sessions. Middle Eastern drama producers spoke about the phenomenal success of “Al Hayba,” a drama produced in Lebanon, throughout the Arabic-speaking world. Producers are now setting their sights on wider exports, with many holding up the Netflix series “Narcos”—filmed in Spanish and English—as an example of the way to go. In another session, Turkish drama producer Inter Medya said it plans to make drama shows with fewer and shorter episodes in order to boost sales in parts of the world like Western Europe.

An appearance at the conference by Facebook’s video production director and head of creative strategy indicated how the social media platform is planning to develop the role of video. Facebook said that 50% of all mobile data traffic is going to video, a share that will increase to 75% in the next few years. It was noticeable that all of the video Facebook presented in its session was displayed on a vertical, mobile screen. While Facebook Watch (launched only in the US so far) is only a few weeks old, the video service is to be rolled out internationally, with its first scripted commission—a version of the Norwegian show “Skam”—to be co-produced with Simon Fuller.

The rise of social video was reinforced by Sean Mills, director of content for Snap Inc., who announced a joint venture with NBC Universal for scripted shows. Mills highlighted how video viewing behaviour on the platform is intrinsically different from the TV set as its camera functionality encourages users to create and engage with content rather than the traditional “sit back and consume” approach. As with Facebook, a mobile-centric approach is evident from the focus on portrait videos and split screens. Storytelling on social video is also unique, emphasising the importance of engaging content and pacing in delivery. The director acknowledged that despite growth in mobile viewing, the platform remains a somewhat new medium that is often used in conjunction with the bigger screen rather than working as a substitute.

Netflix, which announced plans to increase its investment in content to $8 billion next year at its latest quarterly results, was also in evidence at Mipcom, announcing its first original children’s productions in India and South Korea. Andy Yeatman, kids and family director, said that half of the 104 million homes subscribing to Netflix watch children’s content every week. Viewership of children’s content outside the US overtook domestic viewing in the last quarter, he added.

As a global platform, Netflix poses a challenge to the traditional, territory-based model of TV programming, which is key to the existence of get-together events like Mipcom. The programming buyer for one European public broadcaster was unhappy that international rights for two new series presented by US studios at this year’s LA Screenings had already been sold to Netflix. Rights holders are also exploring cheaper ways of selling programming; Sky and Channel 4 are among the investors in TRX, an online platform for buyers and distributors.

This year a total of 13,900 delegates, including 4,800 programme buyers, attended the event, according to organizers. Both numbers were slightly lower than attendance last year.

Tim Westcott is Director, Research & Analysis, Programming, within the IHS Technology Group at IHS Markit
Also contributing to this piece is Fateha Begum, Principal Analyst, Telecom Operator Strategy, within the IHS Technology Group at IHS Markit

Posted 1 November 2017

 


Oxide displays for mobile PCs to see hefty growth in year-end tally


Surging on a wave of demand, the market for oxide displays used in mobile PCs is headed for spectacular growth this year.

Shipments of oxide TFT-LCD panels sized 9 inches and larger intended for the mobile PC market, which includes the notebook PC and tablet PC segments, are forecast to reach 50.6 million units by the end of 2017. Those numbers represent a resounding 194% growth from shipments of 17.2 million units last year, according to the IHS Markit Large Area Display Market Tracker.

The huge growth is due to a big bump in demand from manufacturers incorporating the displays into more notebooks and tablets, thanks to the various benefits bestowed by oxide technology.

All told, the mobile PC segment will account for 91% share of total shipments in the oxide display space, which also includes the segments for monitors and LCD TVs.

Oxide TFT LCD panel shipments from IHS Markit

Oxide display technology provides several benefits, including high resolution, lower power consumption than traditional mechanisms like amorphous silicon (a-Si), and lower production costs than in low-temperature polysilicon (LTPS).

Moreover, oxide offers higher electron mobility at 20-30 cm²/Vs, compared to less than 10 cm²/Vs for a-Si. A higher electron mobility paves the way for the use of smaller transistors, in turn enabling reduced power consumption, improved performance in refresh rates, and lower noise in touch-screen sensitivity.  

Lastly, the high resolution of oxide TFT LCD means the displays are well equipped to handle high-resolution content. Increasingly, such content is being produced in greater quantity given the continuing technological advances in smartphones and TVs, with mobile PCs also gaining in the process.

All these benefits are important to the makers of mobile PCs, eager to seize on any advantage that can be obtained to stimulate a stagnant market.

Advantages and drawbacks of each technology

In comparing LTPS, a-Si, and oxide, one can discern the pros and cons in each technology.  

For instance, LTPS can deliver higher-resolution images and consume less power than other solutions. However, its drawbacks include high production costs and low productivity; it is also less efficient in producing large-sized panels. For their part, oxide TFT LCD panels are inferior in resolution and power consumption to LTPS displays, but they are superior to a-Si TFT LCD panels. Oxide is also suitable for large-sized panel production, and the production cost of oxide displays is lower than that of LTPS counterparts.

Because competition is so intense, cost is an important consideration for PC brands. While LTPS is a perfect technology for high-resolution, low-power displays, the process is too complicated and the production costs too high.

Comparison of TFT specifications from IHS Markit

Important market drivers

Several factors account for the strong demand in oxide mobile panels, including the following:

  • Apple and Microsoft both use oxide TFT LCD displays for their tablet devices, and their example has encouraged other brands to do the same. Apple devices include the 9.7-inch iPad with a resolution of 2048 × 1536 pixels; the 10.5-inch iPad Pro (2224 × 1668 pixels); and the 12.9-inch iPad Pro (2732 × 2048 pixels). Microsoft devices include the 12.3-inch Surface Pro (2736 × 1824 pixels).
     
  • Apple has also been using more oxide panels for its MacBook series, following the example of the iPad Pro.
     
  • Producing an oxide TFT LCD requires additional photomasks, but not as many as an LTPS display needs. This helps panel makers improve their yield rates and production stability with oxide. To this end, panel makers such as LG Display, Sharp, and CEC Panda have been increasing their shipments of oxide panels.
     
  • In the mobile PC segment, a rising trend is the 2-in-1, in which a laptop and a slate-type tablet converge to form a multipurpose device; the form factor is a PC with a foldable or switchable keyboard. Facilitating dual usage, with the notebook for efficiency computing and the tablet for casual computing, requires a high-resolution display that can switch to a low refresh rate, so system stability is important. Moreover, 2-in-1 PCs require a pixel density of 200 to 280 ppi, for which oxide is especially well suited. Examples are the 10.5-inch with 2224 × 1668 pixels; the 12.3-inch with 2736 × 1824 pixels; and the 12.9-inch with 2732 × 2048 pixels.
     
  • For some notebooks and laptops in which a combination of high resolution and lower power consumption is critical, oxide has become the chosen technology. Examples here are the 13.3-inch with 2560 × 1600 pixels; and the 15.4-inch with 2880 × 1800 pixels.
     
  • Some panel makers, such as CEC Panda, are also driving the oxide TFT LCD market by shipping more oxide PC panels on their own initiative. Examples here include the 13.3-inch with 1920 × 1080 pixels; and the 15.6-inch with 1920 × 1080 pixels.
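
For reference, a panel's pixel density in ppi follows directly from its resolution and diagonal size, which is how the 200-280 ppi range above maps onto the resolutions listed. The quick check below uses the 12.9-inch panel as an example.

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density: diagonal resolution in pixels divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# 12.9-inch panel at 2732 x 2048 (the iPad Pro example above)
print(round(pixels_per_inch(2732, 2048, 12.9)))  # about 265, inside the 200-280 ppi band
```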

The following panels are currently driving oxide panel shipments:

  • 9.7-inch, 2048 × 1536 pixels (iPad Pro)
  • 10.5-inch, 2224 × 1668 pixels (iPad Pro)
  • 11.6-inch, 1920 × 1080 pixels (Lenovo ThinkPad Helix Ultrabook)
  • 12.3-inch, 2736 × 1824 pixels (Microsoft Surface Pro)
  • 12.9-inch, 2732 × 2048 pixels (iPad Pro)
  • 13.3-inch, 1920 × 1080 pixels (Acer V3 and Dell Chromebook)
  • 13.3-inch, 2560 × 1600 pixels (Apple MacBook Pro)
  • 15.4-inch, 2880 × 1800 pixels (Apple MacBook Pro)
  • 15.6-inch, 1920 × 1080 pixels (Lenovo ThinkPad E570)
  • 15.6-inch, 3840 × 2160 pixels (4K2K) (Dell, Asus, Lenovo, Toshiba)

LG Display and Sharp are expanding their oxide mobile PC panel shipments the most aggressively. For its part, CEC Panda will increase shipments from 0.6 million units in 2016 to 4.2 million in 2017. 

David Hsieh is Director of Analysis & Research, Displays, within the IHS Technology Group at IHS Markit

Posted 6 November 2017

 

The Industrial Internet of Things is here, but widespread adoption remains elusive


For many people, the term “Internet of Things” (IoT) is, by now, a familiar turn of phrase—a catchy expression that refers to the vast universe of electronic devices and other objects embedded with sensors and software, all connected via networks, enabling the collection, exchange, and analysis of data. A subset of IoT is the Industrial Internet of Things (IIoT), which utilizes the same concepts underpinning IoT but applies on a much bigger scale to industrial entities and processes.

One way to differentiate the two is through their area of focus. Consumer IoT segments apply to personal devices, such as smart home systems or wearable electronics. In comparison, the IIoT governs the realm of machines and sensors in industry, often of a critical nature, where the failure of systems may result in larger disruptions, or, conversely, where efficient functioning can bestow widespread benefits on both the entity and the customers of the particular IIoT process or solution. A breakdown in IIoT utilities, for instance, could presage widespread power blackouts, while a smooth IIoT operation could translate into a lifesaving mechanism, such as when it allows hospitals to seamlessly monitor a patient’s vital signs during surgery.

To date, an increasing body of evidence supports the many advantages firms can gain when they adopt IIoT, leveraging connected sensors, software, and analytics. Often a predetermined goal propels a firm to seek an IIoT solution, and that specific outcome is at the core of the decision, although the innovation IIoT brings is also central to gaining internal buy-in for an IIoT solution. One key use case for IIoT is the need, applicable to any industry, to transfer knowledge from an aging workforce to a younger one with minimal loss of expertise. For instance, the adoption of IIoT can help reduce the skills lost as specialists retire, and IIoT-enabled technologies such as VR can accelerate the training of less experienced staff.

Another use case is that IIoT adoption can accelerate factory productivity rates, via reduced unplanned downtime on the one hand as well as easier and quicker product customization on the other. A third strong argument is that workplaces incorporating IIoT become much safer, benefiting from 24/7 insight into the well-being and safety of worker environments. In emerging markets, meanwhile, IIoT technologies lower the cost of entry in manufacturing by reducing the need for expensive processes like prototyping, at the same time improving quality and worker productivity standards.

Worldwide, shipments of industrial IoT devices across the categories shown in the charts below are forecast to rise to 252 million devices in 2021, up from 99 million in 2016, IHS Markit data show. 
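
As a quick sanity check on that forecast, the implied compound annual growth rate can be computed from the two endpoints. The sketch below is illustrative arithmetic only, using the shipment figures cited above.

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

# Endpoints from the IHS Markit forecast cited above:
# 99 million devices in 2016 to 252 million in 2021 (five years of growth)
growth = cagr(99, 252, 5)
print(round(growth * 100, 1))  # implied annual growth of roughly 20% per year
```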

Pilots occur ahead of scale deployments

Manufacturing companies, like those in other sectors, are taking a cautious approach to IIoT adoption. Among the areas of concern are the potential for disruption that results in unexpected downtime, as well as the risk of cyberattacks and loss of data. Companies are, therefore, engaging in trials to help them assess the effectiveness of new technologies and prospects governing return on investment. The deployment of IIoT outside of core production processes, such as for monitoring assets with drones or in logistics, also helps them test the viability and impact of new technologies.

For meaningful change to occur through the adoption of IIoT solutions, manufacturers need to implement IIoT in areas that directly affect the manufacturing process, such as motors, machinery, and other capital equipment. Only then can an IIoT solution be unleashed to its full potential—powering the continuous collection and processing of huge volumes of data to generate valuable and much-sought-after insights. With the growing sophistication of IIoT technology through advances in processing power, software, and platforms, the challenge today is one of broader understanding and acceptance of IIoT within industrial culture, so that manufacturers move toward greater implementation. For traditional incumbents used to trusting their own systems, a particular fear is placing their valuable cache of data in the cloud, especially at a time when hacking and other cybersecurity threats have become rampant and pervasive.

The four phases of IIoT

There are four phases of the IIoT with innovation occurring at every phase, creating challenges and opportunities alike for manufacturers and suppliers. These four phases, as defined by IHS Markit, are Connect, Collect, Compute, and Create.

  • In the Connect phase—the foundational component of IoT—connectivity and processing capabilities are embedded into the devices of an IoT solution. For this phase, continuing advances in 5G and private LTE networks will help address issues related to device connectivity, specifically security, capacity, and network coverage. However, the prevailing lack of common connectivity standards remains a concern. Over the next few years, new technologies—such as time-sensitive networking (TSN), private LTE, and 5G—will bring greater bandwidth and lower latency.
  • In the next phase, known as Collect, the objective is to gain access to the data of connected devices and to move or store this data. With the inclusion of sensor data, network management and planning become critical to a successful IoT solution. For this phase, standards relating to machine-embedded sensors and IO-Link sensor connectivity will improve the data collection and transfer process. At the same time, the role of IIoT gateways—acting as the bridge and communication-protocol translator between the IIoT device and the cloud—will expand to support edge analytics and data processing for added intelligence.
  • In the third phase, Compute, the collected data is analyzed and processed to help the user make intelligent, informed decisions. Here it becomes critical to identify which data should be handled at the so-called “edge” of the network for local processing, which data should be aggregated and stored in the cloud, and whether a hybrid solution is most appropriate. For this phase, security concerns will restrict the adoption of cloud-based services for storing, processing, and analyzing large data volumes. Edge computing then emerges as a viable alternative, processing data locally and sharing only necessary information with the cloud.
  • In Create, the final phase, unique solutions provide value to stakeholders through access to transformational data, ultimately achieving very specific, predetermined outcomes, including decreased downtime or outages in operations, reduction in waste, higher cost savings, or new business models. In this phase, existing capabilities such as 3D printing will become more sophisticated, and artificial intelligence will be used more intensively.
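To make the four phases concrete, the toy sketch below traces a single device reading through Connect, Collect, Compute, and Create. It is purely illustrative: every function name and value is a hypothetical assumption, and no vendor API is implied.

```python
import json
import random

# Connect: a (simulated) device registers and exposes basic metadata.
def connect_device(device_id: str) -> dict:
    return {"id": device_id, "protocol": "mqtt"}  # placeholder metadata

# Collect: gather raw sensor samples from the device.
def collect(device: dict, n: int = 60) -> list[float]:
    return [20.0 + random.gauss(0, 0.5) for _ in range(n)]  # simulated temperature, deg C

# Compute: edge-process locally, forwarding only a small summary to the cloud.
def compute(samples: list[float]) -> dict:
    return {"mean": sum(samples) / len(samples), "max": max(samples)}

# Create: turn the summary into an actionable outcome (e.g. a maintenance alert).
def create(summary: dict, threshold: float = 25.0) -> str:
    return "ALERT: schedule maintenance" if summary["max"] > threshold else "OK"

device = connect_device("motor-7")
summary = compute(collect(device))
print(json.dumps(summary), create(summary))
```

The point of the sketch is the Compute step: only the summary, not the 60 raw samples, ever leaves the edge, which is exactly the data-reduction trade-off described above.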

Where IIoT stands with various industries

The adoption of IIoT varies across the industrial landscape, depending on the openness of each vertical in embracing IIoT technologies, as well as on individual industry knowledge, conservatism, access to capital, and integration challenges. Analysis is necessarily complex in the case of IIoT, especially because industrial coverage is broad and intersects with multiple vertical markets, resulting in an incredibly diverse set of end customers compared to, say, verticals in the IoT universe.

In the five sample verticals below, IHS Markit tracks the position of each vertical relative to its IIoT evolution phase.

Verticals and the IIoT as rated by IHS Markit

  • The Manufacturing space, for instance, is now at the stage where the industry is using advanced sensors to expand asset monitoring, with edge computing supporting data processing beyond what is undertaken separately in the cloud.
  • In Energy, the oil and gas industry is seeking to apply advanced analytics on available data to automate upstream processes while also bringing about operational efficiency through increased drilling accuracy, resource savings, and improved equipment maintenance.
  • In Maritime, the industry is moving toward satellite services to capture many potential applications, such as asset tracking, route optimization, and equipment monitoring.
  • In Agriculture, IIoT is being extended to smaller agricultural units, positioning them to partake in potentially bigger opportunities in the future, even as larger farms move a step further toward drones and robotic technology for large-scale field support.
  • In Chemicals, industrial applications for IIoT remain limited at present, owing to the high risks involved with critical assets in the process. The chemicals industry must also work out how to maximize revenue through IIoT: the billions of IIoT devices projected to ship across all industries will generate demand for the specialty plastics sector.

Overall, the ramifications of IIoT adoption are much greater than comparable considerations for consumer-centric IoT applications. This is because the factors of production involved in IIoT are so much larger and cover critical spheres of human activity, where the failure of systems is not an option.

Moreover, each vertical has some form or component of manufacturing that is involved in the production of its goods, which means there is a direct relationship and impact that can be measured by a manufacturing entity’s adoption of—or failure to adopt—IIoT, for better or worse. For most manufacturers, the roadmap to IIoT adoption will be one dictated by their own needs and customers.

Jenalea Howell is Research Director, Internet of Things, within the IHS Technology Group at IHS Markit
Also contributing to this piece are Sheryna Gurmeet Singh, Research Analyst; and Alex West, Principal Analyst, Smart Manufacturing; both also within the IHS Technology Group at IHS Markit

Posted 10 November 2017

Grid defection as attractive consumer alternative to reduce energy costs


On the back of rapidly decreasing costs for energy storage and solar photovoltaics (PV), consumers wishing to achieve a low-cost and reliable supply of power are considering grid defection—or at least, partial grid-defection—as an increasingly attractive alternative. But while lower energy costs are certainly welcome news for end customers, the same cannot be said for incumbent utilities, many of which will be facing new challenges that call into question their long-standing business model of selling and distributing electricity.

To be sure, grid defection is a term used largely in North America. Yet it is in Europe where the residential solar and energy storage markets have enjoyed greater penetration over the past few years. For instance, Germany alone already has more than 60,000 residential battery storage systems installed today, compared to less than 10,000 grid-connected residential energy storage systems in the United States.

This article, then, will focus specifically on the economics of grid defection in Europe today and in the future, and will also examine how such a development could impact the energy industry.

Grid defection: a competitive tool for customers

In Europe the economics for going off-grid are improving, but practical considerations make grid defection unlikely to be a major disruptor to existing traditional and entrenched systems of consuming power. Even so, partial grid defection—especially a scenario in which PV is coupled with a battery energy storage system (BESS)—is seen by customers as an increasingly attractive option to hedge against future rises in retail power prices.

As part of the latest report published by the IHS Markit Energy Storage Intelligence Service, we carried out in-depth modelling using hourly demand profiles to compare the levelized cost of energy (LCOE) under four scenarios: future retail rates without any onsite generation; a solar PV system alone; a combination of solar PV and battery energy storage to maximise self-consumption; and a full grid-defection scenario equivalent to 100% self-consumption. The term LCOE, which compares the relative cost of energy produced by different energy-generating sources, is often touted as the new metric in evaluating the cost performance of PV systems.
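As a refresher on the metric itself, LCOE divides discounted lifetime costs by discounted lifetime energy output. The sketch below uses illustrative inputs of my own choosing, not the figures from the IHS Markit model:

```python
def lcoe(capex, annual_opex, annual_kwh, lifetime_years, discount_rate):
    """Levelized cost of energy: discounted lifetime costs / discounted lifetime output."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    energy = sum(annual_kwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy  # currency units per kWh

# Illustrative residential PV system (all inputs are assumptions):
cost_per_kwh = lcoe(capex=6000, annual_opex=60, annual_kwh=4000,
                    lifetime_years=25, discount_rate=0.04)
print(f"{cost_per_kwh:.3f} EUR/kWh")  # roughly 0.11 EUR/kWh under these assumptions
```

The same formula applies to a PV-plus-battery system by adding the battery's capital and replacement costs to the numerator, which is why the comparison against retail rates can be made on a single per-kWh basis.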

The modelling showed that:

  • Partial grid defection using solar PV alone is already competitive with retail rates in the four countries analysed—namely, Germany, Italy, the Netherlands, and the United Kingdom.
  • When analysing the LCOE offered by a combined PV and battery storage solution, IHS Markit shows that this case has already reached parity with retail rates in Germany and Italy, and will do so in the United Kingdom by 2023.
  • Complete disconnection from the grid will not become economically attractive until at least 2025, IHS Markit expects. 
  • For battery storage, the so-called sizing sweet spot, which is also the most economically attractive, lies between 5 and 8 kilowatt-hours of energy capacity for the BESS.
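The self-consumption logic behind such sizing analysis can be illustrated with a toy hourly simulation: surplus PV charges the battery, deficits discharge it, and the remainder is exchanged with the grid. The profiles and loss-free battery below are synthetic assumptions, not the IHS Markit dataset or methodology:

```python
def self_consumption(pv, load, battery_kwh):
    """Fraction of annual load met by PV + battery, given hourly kWh profiles."""
    soc, self_supplied = 0.0, 0.0
    for gen, demand in zip(pv, load):
        used_direct = min(gen, demand)
        surplus, deficit = gen - used_direct, demand - used_direct
        charge = min(surplus, battery_kwh - soc)   # store surplus (lossless toy model)
        soc += charge
        discharge = min(deficit, soc)              # cover the deficit from the battery
        soc -= discharge
        self_supplied += used_direct + discharge
    return self_supplied / sum(load)

# Synthetic day repeated over a year: PV peaks at midday, load peaks morning/evening.
pv_day = [0] * 7 + [1, 2, 3, 3, 3, 3, 2, 1] + [0] * 9   # 18 kWh/day of generation
load_day = [0.3] * 24
load_day[7] = load_day[19] = load_day[20] = 2.0          # morning/evening peaks
pv, load = pv_day * 365, load_day * 365

for size in (0, 5, 8):
    print(f"{size} kWh battery: {self_consumption(pv, load, size):.0%} self-supplied")
```

Under these synthetic profiles the simulation reports roughly 25%, 66%, and 90% of demand self-supplied for 0, 5, and 8 kWh respectively, showing how a mid-sized battery captures most of the evening deficit.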

Regardless of the economics involved, we do not expect a scenario in which customers fully disconnect from the grid or achieve 100% self-sufficiency to become reality in Europe. This is due both to the technical impracticalities stopping such a scenario from taking root and to the fact that a widely available, stable electricity grid is already in place, making complete grid defection unnecessary.

Nonetheless, IHS Markit expects that more than a million residential battery energy storage systems will be installed in Europe by 2025, driven primarily by customers looking to take control of their energy supply and to hedge against future price uncertainty.

New business models offer virtual solution toward 100% independence

As customers look to minimize reliance on the grid as much as possible, they must either actively manage their demand or consider new business models. A number of players, including the likes of Sonnen, E3DC, and Beegy, have started to act as electricity suppliers, offering customers the ability to achieve so-called virtual independence through the mechanisms described below.

  • In the most common scenario, customers will feed excess solar power into the grid, but the solution provider will meter the electricity and add it to the customer’s account. During times when the customer is unable to generate enough solar power, they receive this electricity back from the supplier.
  • Alternatively, some suppliers are offering flat-rate electricity tariffs. Here the customer—with solar and storage installed—pays a fixed fee, which covers all electricity demand that is not self-produced (regardless of consumption levels).
  • Other players offer Virtual Power Plant solutions, which provide additional revenue to customers by aggregating distributed storage systems to participate in ancillary services markets.
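The first mechanism, in which exported solar power is credited to the customer's account and drawn back later, can be sketched as a simple ledger. This is an illustrative model only; actual supplier tariffs and accounting rules differ:

```python
def virtual_account(hourly_net_kwh):
    """Track a virtual storage account: surplus exports build a credit,
    deficits draw it down; anything beyond the credit is billed normally."""
    credit, billed = 0.0, 0.0
    for net in hourly_net_kwh:          # net = PV generation minus consumption
        if net >= 0:
            credit += net               # excess solar fed to the grid, credited
        else:
            drawn = min(-net, credit)   # take electricity "back" from the account
            credit -= drawn
            billed += (-net) - drawn    # remainder is supplied (and billed) by the utility
    return credit, billed

credit, billed = virtual_account([2.0, 1.5, -1.0, -3.5, 0.5])
print(credit, billed)  # 0.5 kWh credit left, 1.0 kWh billed
```

The flat-rate variant described in the second bullet simply replaces the `billed` tally with a fixed monthly fee, shifting the volume risk from the customer to the supplier.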

The challenges of decentralised generation and self-consumption

Irrespective of the extent to which customers defect from the grid, or whether they choose new virtual self-consumption models, the shift toward the so-called energy prosumer fundamentally challenges the incumbents—electricity suppliers and network operators alike. In fact, the share of self-consumed power as part of total electricity demand is expected to double by 2020.

Still, IHS Markit predicts that utilities will play a major role in the growing energy storage market. Already, 7 of the 10 biggest electricity suppliers in Europe, including E.ON and EDF, are commercialising propositions related to behind-the-meter energy storage.

Personally, I believe the trend toward grid defection to increase energy independence won’t stop. We see strong economic and emotional drivers supporting this shift. At the same time, utilities are clearly starting to understand that the energy landscape is shifting around them, and that they will be able to play a crucial role—not only in commercialising new business models, but also in binding customers through energy storage.

For more information on this and related topics, visit our Energy Storage Intelligence Service.

Julian Jansen is Senior Analyst, Solar, within the IHS Technology Group at IHS Markit
Posted 24 July 2017

Medical lasers: a growing market to treat health and aging


Medical lasers afford physicians high levels of precision while inflicting less damage on the surrounding tissue than other treatment techniques do. The benefits to patients may include less pain, less scarring, and faster recovery times.

In general, lasers facilitate less invasive surgical procedures. For the aesthetic market in particular, the decline over the years in the cost per procedure associated with lasers, coupled with the increasing general acceptance of cosmetic surgery, has served to attract a broader base of patients.

Many applications for medical lasers also stem from procedures that treat conditions related to age. With the volume of these procedures expected to keep increasing, and with people all over the world for the most part experiencing growing longevity, opportunity is strong for medical lasers to enjoy continued growth as a replacement to traditional modalities across all applications.

However, the medical laser industry is subject to intense competition. The research for the IHS Markit report on medical lasers, for instance, covered a group of more than 50 vendors—manufacturers that compete with the laser and other energy-based products offered by public companies, with several smaller specialized private companies, and, in the future, with new companies likely to enter the market. The same vendors must also compete with non-laser and non-light-based medical products, including traditional cutting tools and drills; other advanced technologies, such as ultrasound and radiofrequency (RF); and electrosurgical devices. Furthermore, pharmaceuticals are often utilized and so serve as another form of competition.

Aside from competition, another weakness of the medical laser industry stems from the relatively high cost of equipment, along with the resistance proffered by some physicians who do not believe lasers to be a beneficial alternative when compared to traditional treatment methods.

Overall, the world market for medical laser systems, consumables, and maintenance posted total revenue of $2.9 billion in 2016. Medical laser systems represented the largest sector, with sales of $2.0 billion. By 2022, medical laser system revenue is forecast to hit $3.7 billion, equivalent to a compound annual growth rate of 10.4% for the six-year period.

Medical laser system revenue by application

The key drivers for the medical laser market include:

  • An increasingly aging population. Many medical laser applications are for procedures (e.g., ophthalmology, dermatology, and aesthetic) to treat age-related conditions. The number of these procedures will increase in tandem with the rise in aging populations worldwide.
  • In the aesthetics market, lower costs and more social acceptance of cosmetic surgery have served to attract a broader base of patients.
  • Medical lasers are likely to see continued growth as replacement to traditional modalities across all applications covered in our IHS Markit report.

The most common medical laser types in 2016 were CO2, Nd:YAG, and diode:

  • Ophthalmology makes use of a wide range of laser types, including solid-state (Nd:YAG and KTP), excimer, gas (CO2 and argon); and to a lesser extent, diode and fiber, which have been noted for retinal photocoagulation. Ultrafast and ultrashort pulse lasers known as femtosecond lasers are used in ophthalmology, and may have a medium of diode, dye, solid-state, or fiber.
  • Dermatology makes use of a wide range of laser types that includes CO2, solid-state (Nd:YAG, Er:YAG, and ruby), diode, dye, and fiber. The most commonly referenced types are CO2 and Nd:YAG.

The United States and Western Europe combined represented the majority of global medical laser revenue in 2016, accounting for 59.4% of the total market. Robust expansion is forecast for both regions through 2022. In comparison, the Eastern European market is constrained by lingering poor economic conditions, although the region is still expected to see strong growth during the forecast period.

In the Asia Pacific region, Japan has been driving growth. However, China’s expanding middle class is expected to lead in investments in personal care, which will account for the increased use of medical lasers.

Shane Walker is Research & Analysis Director, Healthcare Technology, within the IHS Technology Group at IHS Markit
Posted 17 November 2017

China to account for 34% global RGB OLED capacity in 2022


The latest Display Supply Demand & Equipment Tracker from IHS Markit shows that global AMOLED capacity will rise from 11.9 million square meters (sq m) in 2017 to 50.1 million sq m in 2022, equivalent to 322% growth. The numbers include capacity for RGB OLED—utilized mainly for smartphones, mobile devices, virtual reality, and automotive—as well as capacity for WOLED (also known as White OLED), used primarily for TVs. Of the two segments, RGB OLED has the larger share of the market, with capacity growing from 8.9 million sq m in 2017 to 31.9 million sq m in 2022.

Among players, Samsung Display reigns supreme with 87% of global RGB OLED capacity, followed by fellow South Korean rival LG Display, and Tianma and Visionox from China. The Chinese are aggressive in expanding RGB OLED capacity, especially on flexible display technology, and China’s share of market is projected to soar rapidly in the next couple of years.

Developments on the Chinese front include the following:

  • BOE is ramping up B7, its first Gen 6 flexible RGB OLED fab, in late 2017, and is now constructing the new B11 fab; its B12 fab is also being planned. All three are Gen 6 flexible-technology RGB OLED fabs with capacities ranging from 30,000 to 45,000 substrate sheets per month. Furthermore, after B12, BOE is considering whether to build another two Gen 6 flexible OLED fabs, though the support of local governments will be needed.
  • ChinaStar is currently running a Gen 6 LTPS/TFT LCD fab, while its next fab—the T4, a Gen 6 RGB OLED facility with low-temperature polysilicon (LTPS) backplane capability—is under construction. ChinaStar is also deliberating whether to build another Gen 6 OLED fab after T4.
  • Tianma has an LTPS + OLED fab in Wuhan, in central China, targeting flexible substrates. The fab is now under construction and will be ready for mass production in 2018.
  • EverDisplay is constructing a new Gen 6 Flexible OLED fab in Shanghai, in addition to its current Gen 4.5 rigid OLED fab also in the same city.  
  • Visionox at present operates a Gen 5.5 LTPS+OLED fab for rigid glass substrates. And with an infusion of capital from China’s Black Cow group along with financial support from the local government, Visionox is building a Gen 6 flexible OLED in Gu’an County north of Beijing. The company is also considering another new Gen 6 flexible OLED fab outside Gu’an.

All the new RGB OLED fabs under construction in China aim to produce flexible substrates for full-screen displays, targeting the 18.x:9 and 19.5:9 super-wide aspect ratios as well as curved-edge smartphone display form factors over the next couple of years. In the long term, the fabs will take on production of bendable and foldable screens for mobile devices.

The charts below show Chinese fabs in development for Gen 5.5 and above as well as for Gen 8.

By 2022, Chinese makers will possess RGB OLED capacity of some 10.7 million square meters, equivalent to a 34% share of the global total. Chinese capacity will mostly target the smartphone display market, reserving some volume for virtual reality and augmented reality applications as well as for the automotive space.

Some concerns remain

Despite the aggressive expansion of Chinese RGB OLED makers, concerns remain, especially because flexible OLED technology requires a long learning curve that impacts production variables such as yield rate, stability, and reliability. Another important issue is customer cooperation: Because of the long learning curve, smartphone makers and other similar customers will need patience to work with the just-emergent technology on issues like display panel design and mass production.

Chinese OLED makers may also face a fierce onslaught from Samsung Display, a player of indisputable power and might, with the commensurate capacity, technological acumen, and strength to block Chinese manufacturers’ attempts to grow.

Indeed, Samsung Display remains the dominant force in RGB OLED for smartphone displays. Its RGB OLED capacity will increase from 7.7 million sq m in 2017 to 16.6 million sq m in 2022, with capacity built on giant fabs like the A4 and A5 in South Korea.

Centralizing giant fabs in one site can be advantageous, offering the best economies of scale, optimum efficiency, and a streamlined supply chain. Massive OLED fabs can also serve as a bulwark of stable and sufficient supply for large purchase orders from key customers; in Samsung Display’s case, these would be its internal customer of the same name, for the Samsung Galaxy line of smartphones; and from Apple, for its industry-leading iPhone. Last, giant fabs are well-placed to enjoy cost savings in operations given that these fabs often already boast smooth throughput—an advantage they can wield over their competitors.

The centralized, giant-fab approach of the Koreans contrasts with the smaller, distributed tactics employed by China’s OLED makers. Because a giant fab at this point would provide little practical benefit, the Chinese are opting instead to build several dispersed and separate OLED fabs. Moreover, domestic players hoping to build infrastructure such as that required for RGB OLED will need the support of subsidies from local governments. As a result, it makes sense for the Chinese manufacturers to build around the country as they chase the available money from government coffers.

This type of investment, however, is not suitable for very-large-scale WOLED initiatives, simply because the risks of failure are too high. This is also one reason there are numerous state-of-the-art Gen 10.5 fabs in China for TFT LCD, where both the technology and the market are comparatively mature—something that is not yet true of Gen 8.5 WOLED.

The much smaller capacity of the Chinese, compared to that of Samsung Display, may also shape competition in the future. Samsung Display will use its colossal capacity to meet the needs of the South Korean maker’s two critical customers—Samsung and Apple. Chinese makers, meanwhile, will be targeting the relatively smaller projects of smartphone makers in China, such as Huawei, Xiaomi, Vivo, Oppo, Meizu, Lenovo, and ZTE, along with many white-box makers.

With the smartphone war now expanding from brand allure to also include essential components and features such as the phone’s display, the challenge facing China’s makers will be most acute during a stagnant or oversupply market situation. Because Apple and Samsung on their own won’t be able to digest all the excess RGB OLED panel output produced by Samsung Display in the event of market stasis, the South Korean titan will bring to bear all of its considerable heft and might to go after the local Chinese smartphone makers that currently make up the customer base of domestic OLED makers.

That is certain to put Chinese players in a bind, as they will inevitably have to compete with a powerful foreign-based player on the home front, on ground zero where they are supposedly most secure.

By 2022 and assuming that expansion is carried out by each key player accordingly, RGB OLED capacity for each player will stand at the levels shown below, based on IHS Markit forecasts in the Display Supply Demand & Equipment Tracker. The list includes only those companies deemed to have a greater than 30% possibility of investing in forthcoming RGB OLED capacity.

  • Samsung Display: 16.6 million sq m; 52% capacity share of world total
  • BOE: 4.8 million sq m; 15% capacity share
  • LG Display: 3.4 million sq m; 11% capacity share
  • Tianma: 1.9 million sq m; 6% capacity share
  • ChinaStar: 1.5 million sq m; 5% capacity share
  • Visionox: 1.4 million sq m; 4% capacity share
  • EverDisplay: 1.2 million sq m; 4% capacity share
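As a quick consistency check, the shares above follow directly from dividing each maker's forecast capacity by the 31.9 million sq m global RGB OLED total cited earlier:

```python
# Capacity figures (millions of sq m, 2022 forecast) taken from the list above.
capacity_sq_m = {
    "Samsung Display": 16.6, "BOE": 4.8, "LG Display": 3.4, "Tianma": 1.9,
    "ChinaStar": 1.5, "Visionox": 1.4, "EverDisplay": 1.2,
}
world_total = 31.9  # million sq m of global RGB OLED capacity in 2022, cited earlier

for maker, cap in capacity_sq_m.items():
    print(f"{maker}: {cap / world_total:.0%}")
```

Note that the seven listed makers account for about 30.8 million sq m, slightly under the world total, since smaller players outside the list hold the remaining capacity.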

David Hsieh is Research & Analysis Director within the IHS Technology Group at IHS Markit
Posted 27 November 2017
