Digital Delusions

Part Three of Value-Delivery in the

Rise-and-Decline of General Electric

Illusions of Destiny Controlled & the World’s Real Losses

By Michael J. Lanning––Copyright © 2022-23 All Rights Reserved

This is Part Three of our four-part series on GE’s great rise and eventual dramatic decline, seen through our value-delivery lens. Businesses as much as ever need strategies that deliver superior value to customers, including via major product-innovation. GE’s lengthy history offers striking lessons on this key challenge. Previous posts in this series provided an Introductory Overview, then Part One on GE’s first century (1878-1980), followed by Part Two on the Jack Welch era (1981-2010). Now, this Part Three reviews GE’s period of Digital Delusions (2011-2018). (Note from the author: special thanks to colleagues Helmut Meixner and Jim Tyrone for tremendously valuable help on this series.)

In 2011, in the wake of the global financial crisis, GE launched a “digital transformation,” a much-praised initiative aiming for major new business and revenues from software. The company had presciently foreseen explosive growth in the “Industrial Internet,” the installation and connection of digital technology with industrial equipment. In GE’s vision, this phenomenon would produce massive data about equipment-performance and would enable the company to establish a powerful software-platform as the “Operating System” for the Industrial Internet. GE therefore expected the initiative to pay off handsomely.

However, GE overestimated three factors: the potential benefit to industrial customers from applying this resulting data; the willingness of customers to share that data with GE; and the ability of GE to establish a viable software platform. As a result, this initiative produced very disappointing results. Even more important, it ignored the fundamental goal that should have been GE’s top priority after 2011: reestablishing its strategy––as prior to Jack Welch in 1981––for creating large, new, innovation-based product-businesses. Thus, the period of this transformation, 2011-2018, proved mostly one of Digital Delusions.

*   *   *

By 2011, GE had gone through two major strategic eras since its 1878 founding, and now needed new direction. In the first of these eras––GE’s first century, through 1980––it had created huge, profitable, electrically related product-businesses, via major product-innovations. However, after growth slowed in the 1970s, the impatient, aggressive Jack Welch and successor Jeff Immelt essentially abandoned that earlier strategy. Instead, in its second era––1981-2010––GE followed Welch’s radical, hybrid strategy, leveraging the strengths of its traditional product-businesses to dramatically expand its financial ones.

This ingenious strategy made GE the world’s most valued and admired corporation by 2000. Yet, longer-term it proved a major error. GE’s financial business could not sustain its hyper-fast growth much past 2000. Finally––riskier than the company had understood––this high-flying engine of the company’s growth largely collapsed in the 2008-2010 crisis.

In its third era––2011-2018––GE could have returned to its highly successful, first-era strategy. Instead, although downsizing its financial business, GE otherwise continued the key error of its second era––no longer creating large new, innovation-based product-businesses. GE’s star had faded somewhat, while the tech giants had begun to dominate the business-community’s imagination. GE wanted to retake leadership––so, it too would become a “digital” super-star. Welch’s GE had made a bad long-term bet on the hybrid strategy; Immelt’s GE now made another bad bet, this time, on “digital transformation.”

In this widely applauded transformation, GE formed a new business, GE Digital. It aimed to be a leading software provider, focused on reducing equipment-downtime and thus maintenance costs, and developed a major cloud-based software platform. By 2017, however, after spending close to $7 billion on this digital transformation, it was clear that GE would not reach even 10% of its $15 billion software-revenue goal. By then, only 8% of GE’s industrial customers were using its cloud-based Predix software. In 2018, when Digital was spun off, it only showed $1.2 billion in annual software revenue. More important than missing its software dreams, GE in its third era would fail to reestablish a strategy for creating large new, innovation-based product-businesses. GE’s celebrated digital transformation would ultimately prove little more than a digital delusion.

After the Crisis––An Incomplete and Misguided Strategic-Refocus

In the wake of the global financial crisis, Immelt’s team did see the need to refocus GE on its traditional––non-financial––businesses. Revenues from GEC––its financial businesses––had peaked by 2000 at about 50% of total GE. They declined in the next decade but were still over 30% of GE in 2009. Reacting to the crisis, GE exited consumer finance, and GEC dropped below 12% of GE by 2010, to dwindle further in the next few years.

As the NY Times’ Steve Lohr writes in December 2010: 

So G.E. has revamped its strategy in the wake of the financial crisis. Its heritage of industrial innovation reaches back to Thomas Edison and the incandescent light bulb, and with that legacy in mind, G.E. is going back to basics. The company, Mr. Immelt insists, must rely more on making physical products and less on financial engineering… Mr. Immelt candidly admits that G.E. was seduced by GE Capital’s financial promise––the lure of rapid-fire money-making unencumbered by the long-range planning, costs and headaches that go into producing heavy-duty material goods.

However, while GE did thus deemphasize its financial-businesses, it otherwise failed to refocus completely on its first-century strategy––creating huge new product-businesses via major innovation. Its core, mature businesses––Electric Power, Lighting, Appliances, Jet Engines, and Medical Imaging––could have grown at roughly GDP rates (about 2% for 2010-2015), but GE wanted much faster growth. For that, it would have needed to enter other large markets with potential GE-synergies.

Indeed, GE under Welch and Immelt had abandoned or neglected several such markets, including semiconductors, computers, electronics, and electric vehicles. Had it aggressively entered or reentered some of these, GE’s strategic roots might have enabled it to again deliver superior value and create major new businesses. Unfortunately, GE did not attempt this complete strategic refocus, choosing instead to follow the route of digital transformation.

After 2011, GE foresaw the rise of the Industrial Internet, central to its vision of digital transformation. Millions of powerful-but-inexpensive sensors would be connected to industrial equipment, yielding enormous quantities of data. “Advanced Analytics,” including machine-learning and artificial intelligence (AI), could mine that Big Data for insights into equipment efficiency-and-performance improvements.

GE would, it said, become a “software and analytics company.” Starting in 2012, it developed a cloud-based software-platform, Predix, meant to become the “operating system for the industrial internet.” In 2015, the company formed a new business, GE Digital. Hoping for major new revenues––like the earlier financial business––GE projected that it would become a “top-ten” software provider, with revenues by 2020 of $15 billion (~19% of product-business revenues). The business community and media applauded this trendy, new vision of digital transformation.

However, it was a muddled vision. GE had always supported its product businesses with a software function, but now tried to convert that support function into a separate business with its own revenues––GE Digital. This vision inevitably created confusion and conflict over the purpose of GE software applications––whether to focus primarily on supporting GE product-businesses, or on maximizing GE Digital’s software revenues.

Compounding this confusion, GE underestimated the challenges of developing all the software needed, and the need for customized––not generic––customer-solutions. Having been early to focus on the Industrial Internet, GE over-confidently underestimated the intensity of competition that would quickly emerge for its software, and that already existed for the cloud-based platform. It also assumed that customers, including competitors, would share proprietary data, enabling GE’s analytics to uncover solutions. But many customers had no interest in sharing such sensitive data, nor in becoming reliant on GE software.

Foreseeing––and Betting Big On––the Industrial Internet

A 2012 GE paper predicted the onset of the Industrial Internet, which it argued would usher in a new era of innovation. This advance would be enabled in part by analytics, with their predictive algorithms. It would also be facilitated by millions of inexpensive industrial sensors and by the connectivity provided by the Internet. In this vision, GE would play a leading role in developing and utilizing this Industrial Internet.

After Jeff Immelt announced this vision at a 2012 conference in San Francisco, the business media strongly embraced it. Soon, the Industrial Internet was also termed the Industrial Internet of Things (IIoT). In a 2014 Fast Company article, “Behind GE’s Vision for the IIoT,” Jon Gertner looks back at that presentation:

GE could no longer just build big machines like locomotives and jet engines and gas turbine power plants–“big iron,” … It now had to create a kind of intelligence within the machines, which would collect and parse their data. As [Immelt] saw it, the marriage of big-data analysis and industrial engineering promised a nearly unimaginable range of improvements.

GE correctly predicted that data generated by the IIoT, and analytics applied to that data, would both grow explosively. The company began investing in resources to play a leading role in the emerging IIoT. Laura Winig, in a 2016 MIT Sloan Review article, wrote that GE had “bet big” on the Industrial Internet, investing “$1 billion” in adding sensors to GE machines, connecting them to the cloud, and analyzing the resulting data to “improve machine productivity and reliability.” GE and others expected this digital transformation to pay off especially in one key benefit for industrial businesses––improved predictive maintenance.

Digital Transformation as Central GE Strategy

By 2016, Immelt’s team and others were referring to GE as the First Digital Industrial Company––a change they saw as a key element in digital transformation. Winig further comments on a GE advertising campaign, then recently launched:

The campaign was designed to recruit Millennials to join GE as Industrial Internet developers and remind them—using GE’s new watchwords, “The digital company. That’s also an industrial company.”—of GE’s massive digital transformation effort.

Ed Crooks, in the Financial Times in January 2016, discusses GE’s installation of systems for collecting and analyzing manufacturing data in a growing number of its factories:

The technology is just part of a radical overhaul designed to transform the 123-year-old group [GE] into what Jeff Immelt, chief executive since September 2001, calls a “digital industrial” company. At its core is a drive to use advances in sensors, communications and data analytics to improve performance both for itself and its customers.

“It is a major change, not only in the products, but also in the way the company operates,” says Michael Porter of Harvard Business School. “This really is going to be a game-changer for GE.”

However, it is not clear in retrospect why this transformation should have been such a priority. The new digital technology could and should have been used by GE to enable the value-delivery strategies of the company’s product businesses, without transforming GE into a “Digital Industrial Company.” Nonetheless, conventional wisdom continued to insist that this digital transformation was mandatory. As the business-media and others seemed to agree––GE was ahead, and industrial businesses needed to catch up. Steve Lohr, also in 2016 in the NY Times, discussed the widespread hype for GE’s software effort, quoting another Harvard Business School professor:

“The real danger is that the data and analysis becomes worth more than the installed equipment itself,” said Karim R. Lakhani.

In a 2017 Forbes article, Randy Bean and Thomas Davenport even issued a warning to other firms––GE, they asserted, had already achieved its digital transformation:

Mainstream legacy businesses should take note. In a matter of only a few years, GE has migrated from being an industrial and consumer products and financial services firm to a “digital industrial” company with a strong focus on the “Industrial Internet” ….[achieved by] leveraging AI and machine learning fueled by the power of Big Data.

Also in 2017, HBR published an interview with Immelt, How I Remade GE. It implied that GE’s digital initiative had achieved another great GE triumph, thanks to the decision by Immelt and the company to go “All In.” As Jeff explains:

We’ve approached digital very differently from the way other industrial and consumer products companies have. Most say, “We’ll take an equity stake in a digital start-up, and that is our strategy.” To my mind, that’s dabbling. I wanted to get enough scale fast enough to make it meaningful.

Thus, GE had earlier and enthusiastically launched Digital and the Predix platform business. By late 2017, however, it was dawning on observers that GE’s digital transformation, while ahead of others’, lacked clear evidence of building a viable software business. Partly for this reason, Immelt was forced out of GE that same year.

Lohr in the NY Times, later that year, writes that:

G.E. not only weathered the financial crisis but also made large investments. A crucial initiative has been to transform G.E. into a “digital-industrial” company, adding software and data analysis to its heavy equipment. But the digital buildup has been costly. G.E. will have invested $6.6 billion, from 2011 through the end of this year, with most of the spending in the last two years… Such investments, however, had a trade-off, as they sacrificed near-term profit for a hoped-for future payoff.

By 2018, the business media began to note the apparently widespread failure of digital transformations. Many consultants and academics had become devoted to the view that such transformation was mandatory, so they argued (and still argue today) that companies simply needed to try harder. However, the obstacles proved formidable, and numerous companies fell short. In 2018, Davenport and George Westerman would write in HBR:

In 2011, GE embarked upon an ambitious attempt to digitally transform its product and service offerings. The company created impressive digital capabilities, labeling itself a “digital industrial” company, embedding sensors into many products, building a huge new software platform for the Internet of Things, and transforming business models for its industrial offerings. The company received much acclaim for its transformation in the press (including some from us). However, investors didn’t seem to acknowledge its transformation. The company’s stock price has languished for years, and CEO Jeff Immelt…recently departed the company under pressure from activist investors.

Most fundamentally, GE’s Digital venture and its pursuit of digital transformation did not help the company redirect its strategy toward creating large new, innovation-based product-businesses. More specifically, GE in its third strategic era––2011-2018––made three bad bets in particular, focused on overestimations of: the value of reduced equipment-downtime; the willingness of customers to share proprietary information; and GE’s ability to provide a universal operating system for the industrial internet.

GE’s Bad Bet on the Inherent Universal Value of Reduced Downtime

Industrial companies had long worked to improve predictive maintenance (PM)––predicting and avoiding equipment-failure, thus reducing unplanned downtime and maintenance costs. After about 2010, improved PM was enabled, in principle, by widespread advances in the quantities of Big Data collected and in the power of analytics. Savings in GE’s own manufacturing seemed possible, but the potential customer-value seemed especially exciting. As Gertner writes of Immelt’s thinking in his 2014 Fast Company article:

Being able to walk into the offices of an airline or a freight CEO and tell him that data might ensure that GE jet engines or locomotives would have no unplanned downtime could change the way Immelt’s company does business. GE products could almost sell themselves––or, if his competitors had the capability to do this first, stop selling at all.

Given the history of GE’s product businesses since the 1980s, it was a natural extension of that evolution for Immelt’s GE to now focus heavily on PM. Under Welch, the company had reduced its commitment to major advances in the benefits delivered by its products, shifting its relative focus to service benefits. Not surprisingly, GE now saw improved PM as a major prize of digital transformation. Winig explains the value of PM:

While many software companies like SAP, Oracle, and Microsoft have traditionally been focused on providing technology for the back office, GE is leading the development of a new breed of operational technology (OT) that literally sits on top of industrial machinery. Long known as the technology that controls and monitors machines, OT now goes beyond these functions by connecting machines via the cloud and using data analytics to help predict breakdowns and assess the machines’ overall health.

Therefore, improving PM of a customer’s GE equipment, via Big Data and analytics, made sense––but only if the obstacles and cost of doing so were outweighed by the benefits of reduced unplanned-downtime. This result turned out to be disappointingly problematic.

First, reducing an industrial-customer’s unplanned downtime is highly valuable only in cases where that downtime would be crucially disruptive and expensive for that customer. Such a result can occur, for example, when the unplanned downtime of some equipment shuts down an entire operation for a significant time period. However, when the customer’s operation has ready access to replacement or alternative equipment, the total costs of the unplanned downtime may be fairly minor. In such a case, the cost of achieving highly accurate PM may not be justified.

GE’s Bad Bet That Industrial Customers Would Willingly Share Proprietary Info

On the other hand, in some cases eliminating or greatly reducing unplanned downtime––such as via excellent PM––has great value for the customer. However, such PM depends on access to extensive customer-data that can be used to analyze the customer’s equipment and predict likely breakdowns. GE realized that this data would need to cover much more than its own equipment. As Winig continued:

GE had spent years developing analytic applications to improve the productivity and reliability of its own equipment…GE’s strategy is to deploy these solutions and then expand to include non-GE plant equipment as part of the solution. GE wants to go beyond helping its customers manage the performance of individual GE machines to managing the data on all of the machines in a customer’s entire operation. Customers are asking GE to analyze non-GE equipment because those machines comprise about 80% of the equipment in their facilities. They’re saying, ‘It does me no good if my $10 million gas turbine runs at 98% reliability if I have a valve that fails and shuts my entire plant down.’

Therefore, to conduct the analysis needed to improve a customer’s PM, GE needed access to data from all a customer’s equipment––including non-GE equipment, even competitors’ machines. This requirement would pose a bigger conflict with customers’ priorities than GE first anticipated. As Winig’s Sloan Review colleague Sam Ransbotham adds:

Unsurprisingly, GE has struggled with this step, as everyone loves the idea of benefiting from everyone else’s data, but is far less excited about sharing their own—a tragedy of the commons. The potential is there, but incentives are not well aligned.

GE never made much progress convincing customers to disregard this conflict. Moreover, collecting and analyzing the data needed to improve PM proved more complex than GE expected. Looking back in 2019, Brian Bunz in IoT World Today commented that we can still expect additional progress in PM, but that applying machine learning, including for PM, is often dauntingly complex. He cited a Bain & Co. survey of 600 high-tech executives, which found that IIoT projects, and PM in particular, are often complex to implement, frustrating the effort to capture valuable insights.

GE’s Bad Bet on Providing the Operating System for the Industrial Internet

In an important aspect of its digital transformation, GE assumed it needed to build a new software platform that could handle massive data sets, enabling the connection and analysis of that data. Such a platform, GE thought, could support the key, powerful analyses that would improve equipment efficiency and performance, and could thereby become the “operating system of the Industrial Internet.” The company therefore began to build such a platform, which it named Predix.

However, playing such a central role for most or all industrial companies––including many competitors to GE––was not a realistic goal. And more importantly, playing this “operating system” role was not necessary. Many observers have speculated on why Predix failed, asking in effect what the Predix value proposition was, and why it wasn’t well enough delivered. More fundamentally, however, Predix and such an operating system did not have a reason for being and thus no superior value proposition it could deliver.

GE, nonetheless, concluded that it should and could gain a dominant position in cloud computing, as part of becoming the operating system of the Industrial Internet, and so it launched Predix in 2013. For The Street in June 2015, James Passeri writes that:

General Electric is betting its cloud-based Predix platform will do for factories what Apple’s iOS did for cell phones…GE has rolled out 40 apps this year for industrial companies using its rapidly growing industrial software platform, which combines device sensors with data analytics to optimize performance and extend equipment life…

However, digital giants were already providing and investing heavily in cloud-based services. Amazon Web Services (AWS) launched in 2006, Microsoft’s Azure in 2008, followed by Google and later Siemens, IBM, and others. Yet, GE seemed to imagine that since it focused on industrial––not consumer––data, it need not compete with established cloud players. Passeri quotes Bill Ruh, GE’s VP of global software, on the supposedly unique advantages of Predix:

“In the future, it’s going to be about taking analytics and making that part of our product line like a turbine, an engine or an MRI” machine… “This cloud is purpose-built for the industrial world as other clouds are built for the consumer world,” he said.

In 2016, GE and some observers believed that Predix put GE in position to lead and even control most or all IIoT software and its use. Winig, in MIT Sloan Review, comments that:

GE executives believe the company can follow in Google’s footsteps and become the entrenched established platform player for the Industrial Internet— and in less than the 10 years it took Google.

GE may have seen Microsoft as an even more relevant model for GE’s possible domination of IIoT software. Immelt in his 2017 retrospective interview in HBR continues:

When we started the digital industrial move, I had no thought of creating the Predix platform business. None. We had started this analytical apps organization. Three years later some of the people we had hired from Microsoft said, “Look, if you’re going to build this application world, that’s OK. But if you want to really get the value, you’ve got to do what Microsoft did with Windows and be the platform for the industrial internet.” That meant we would have to create our own ecosystem; open up what we were doing to partners, developers, customers, and noncustomers; and let the industry embrace it.

So we pivoted. Again we went all the way. We not only increased our investment in digital by an order of magnitude—a billion dollars—but also told all our businesses, “We’re going to sunset all our other analytics-based software initiatives and put everybody on Predix, and we’re going to have an open system so that your competitors can use it just like you can.”

However, Predix never became a widely accepted product, so it lacked the marketplace leverage of Microsoft’s Windows. Moreover, Predix’s success required access to industrial customers’ proprietary data, the same obstacle that generally inhibited GE’s success with Predictive Maintenance, as discussed earlier. In 2016, The Economist writes:

Whereas individual consumers are by and large willing to give up personal information to one platform, such as Google or Facebook, most companies try to avoid such lock-in. Whether they are makers of machine tools or operators of a factory, they jealously guard their data because they know how much they are worth.

By 2017, it was becoming clear that the triumph of Predix would be deferred, partly reflecting this conflict between the vision GE wanted and some of the key priorities of customers. In June of 2017, Ed Crooks in Financial Times writes that:

Last year there were 2.4bn connected devices being used by businesses, and this year there will be 3.1bn, according to Gartner, the research group. By 2020, it expects that number to have more than doubled to 7.6bn. …[However] the shiny digital future is arriving later than some had hoped. The potential is real, says McKinsey senior partner Venkat Atluri, but industrial companies have been slow to exploit it for a variety of reasons. They may need to change their organisations radically to benefit from the new technologies. Another obstacle is that there are so many different products and services available that industrial standards have not yet emerged.

Working with expensive and potentially hazardous machinery, industrial businesses are cautious about entrusting critical decisions to outsiders. “Customers are risk-averse, because they have to be,” says Guido Jouret, chief digital officer of ABB. “If you do something wrong, you can hurt people.”

Potential customers are also very cautious about control of the data that reveal the inner workings of their operations.

Eventually, it sank in at GE that competing with the established cloud providers was not realistic, and that establishing a dominant platform was in any case pointless. Alwyn Scott in Reuters wrote later in 2017 regarding GE’s missteps with Predix:

Engineers initially advised building data centers that would house the “Predix Cloud.” But [after Amazon and Microsoft spent billions] on data centers for their cloud services, AWS and Azure, GE changed course…abandoned its go-it-alone cloud strategy a year ago. “That is not an investment we can compete with,” Ruh said.

It now relies on AWS & expects to be using Azure by late October. As GE pivoted away from building data centers…focused on applications, which executives now saw as more useful for winning business and more profitable than the platform alone. “That is probably the biggest lesson we’ve learned,” Ruh said. [GE] also faced legacy challenges in adapting to Predix software. GE has many algorithms for monitoring its machines, but they mostly were written in different coding languages and reside on other systems in GE businesses. This makes transferring them to Predix more time consuming.

An additional complexity facing GE Digital was that the company’s businesses were of course varied, requiring a range of solutions and value propositions. Related to this issue, Stacey Higginbotham, a tech writer and close follower of the IIoT, observed that GE’s IIoT effort could not work with the same solution for every customer (or segment). She writes in late 2017 that:

Since launching its industrial IoT effort five years ago, GE has spent billions selling the internet of things to investors, analysts and customers. GE is learning that the industrial IoT isn’t a problem that can be tackled as a horizontal platform play. Five years after it began, GE is learning lessons that almost every industrial IoT platform I’ve spoken with is also learning. The industrial IoT doesn’t scale horizontally. Nor can a platform provider compete at every layer. What has happened is that at the computing [layer,] larger cloud providers like Microsoft and Amazon are winning.

So when it comes to the industrial IoT, the opportunity is big. It’s just taking a while to figure out how to attack the market. Step one is realizing that it’s silly to take on the big cloud and connectivity providers. Step two is quitting the platform dreams and focusing on a specific area of expertise.

In a similar, common-sensical observation on Medium, product-management and innovation writer Ravi Kumar writes that GE’s IIoT platform failed because GE tried:

Supporting too many vertical markets: It’s difficult to build a software platform that works across many verticals. GE tried to be everything to everyone and built an all purpose platform for the wider industrial world, which was a diffused strategy.

In retrospect, it appears that GE failed to examine its Predix venture in some fundamental ways; ways that a value-delivery assessment could very well have identified. GE seemed simply to assume that customers would benefit from Predix, rather than rigorously describing those benefits, the possibly superior value proposition Predix could deliver, and the trade-offs customers would need to make.

However, at least one company, widely applauded, recognized the real potential of the new digital technology. It did so––in contrast to GE––by using IIoT and analytics to enable the strategy of its existing business, not to build a new, separate “digital business.”

Caterpillar––Unlike GE––Used IIoT & Analytics to Enable Its Strategy

Prior to 2012, Caterpillar––the large manufacturer of construction and mining equipment––had long delivered a superior value proposition. Cat’s business model had produced lower lifetime cost for customers, via superior equipment-uptime. As Geoff Colvin explains in Fortune in 2011:

The model’s goal is simple: Ensure that customers make more money using Cat equipment than using competitors’ equipment. Though Cat equipment generally costs more than anyone else’s, the model requires it to be the least expensive over its lifetime. For customers, maintenance and uptime are critical.

Kenny Rush, vice president of Sellersburg Stone in Louisville, says that’s why he’s such a fan of the Caterpillar 992 loader his firm uses in a quarry. “We’ve run that machine since 1998, and it’s had 98% to 99% availability,” he says. “It cost $1.5 million. But we ran it 22,000 hours, which is about 10 years, before replacing any major components.” Dealers are key to those economics. A machine that breaks down can halt an entire job, and getting back under way in two hours rather than 48 hours means big money. Large, successful dealers that carry lots of parts, maintain skilled technicians, and move fast are thus a major selling point, and Cat’s dealer network is the undisputed best in the business.

This one clear value proposition––superior equipment-uptime––applied to essentially all of Cat’s customers. Now, after 2012, Cat recognized that improving PM––via analytics applied to massive IIoT-data––would perfectly build on and strengthen Cat’s earlier, uptime-focused strategy. Cat should not be understood as having succeeded in executing the digital transformation strategy that GE had tried but failed to execute. Rather, Cat pursued a strategy focused on delivering its established value proposition of superior uptime, but using the new digital technology to help.

At this time, GE also tried to use the digital technology, but primarily to achieve “digital transformation,” chasing a vague notion of digital strategy. Cat instead integrated the technology into its existing business, enabling rather than replacing the strategy that had long focused on making its customers more successful.

In doing so, Cat––unlike GE––did not try to create the software it needed; more realistically, it partnered with an analytics start-up, Uptake. As Lindsay Whipp writes in the Financial Times in August 2016:

Uptake, which was reported last year to have a $1bn valuation and in which Caterpillar has an undisclosed stake, uses its proprietary software to trawl the immense amount of data collected from Caterpillar’s clients to predict when a machine needs repairing, preventing accidents and extended downtime, and saving money.

Caterpillar conducted a study of a large mining company’s malfunctioning machine, which had resulted in 900 hours of downtime and $650,000 in repairs. If the technology from the two companies’ project had been applied, the machine’s downtime would have been less than 24 hours and the repairs only $12,000.

Additional articles in 2017––in Forbes, the WSJ, and Business Insider––emphasized Cat’s improvements in PM via analytics applied to IIoT-data. GE continued generating hype around its software promises for IIoT applications and the utility of Predix––the “operating system for the industrial internet.” More modestly, but delivering more substance, Cat focused only on delivering its specific value proposition for its users. As Tom Bucklar, Cat’s Director of IoT, explains in a 2018 article in Caterpillar News:

When we start to talk about our digital strategy, we really look at digital as an enabler. At the end of the day, we’re not trying to build a digital business. We’re trying to make our customers more profitable.

However, for many industrial businesses––including most of GE’s businesses during this 2011-2018 era––the value of equipment uptime varies. Unplanned downtime always has some cost, but avoiding it––such as via analytics––does not always justify major investments. In many cases, an operation can affordably work its way around unplanned downtime on a piece of equipment. As Kim Nash, quoting Cat’s Bucklar, explains in the WSJ, some of Cat’s most complex machines may be a customer’s most crucial ones. For those, downtime is unaffordable, but a simpler machine could be easily replaced. In that case, there will not be “as much customer value in that prediction.”

Thus, GE could not have simply applied Cat’s specific, superior-uptime value proposition to GE’s businesses, each of which needed to deliver their own specific value proposition. Cat saw it could enable its specific, uptime-focused strategy, via Big Data and analytics. Confused, GE equated its strategy with using data and analytics, and pursuing digital transformation, applying the same, generic digital strategy for all customers.

Caterpillar did not need to develop a software platform––it simply needed to deliver its clear, superior value proposition. Likewise, GE did not need to make the Predix platform work––GE needed to choose a clear value proposition in each major business and then use the digital technology to help deliver each of those propositions successfully.

GE failed in two key ways. First, it failed to focus on specific strategies to profitably deliver superior value in its various businesses. Second, it failed to then use digital technology and capabilities to enable those value-delivery strategies. This mentality––prioritizing the enabling of strategy rather than prioritizing digital assets and skills––characterizes the approach that Caterpillar, in contrast to GE, so successfully followed. Some observers would argue today that Cat succeeded in a “digital transformation,” but it did not really transform––it simply focused on using analytics to better execute its clear, customer-focused, uptime strategy. Thus, GE failed by focusing too much on its digital transformation, working to become the first digital industrial company, and not enough on developing its value-delivery strategies.

Bitter End to GE’s Digital Delusions

Many observers, after Immelt had been forced out in 2017, continued to see GE Digital and the company’s attempted digital transformation as good ideas, executed badly. Probably the whole initiative could have been pursued more effectively, but the main problem with this third GE strategic era, as with GE’s second era––1981-2010––was the fundamental flaw in the company’s strategy. It wasn’t totally wrongheaded to pursue the development and use of the IIoT and analytics. It was wrong, however, to put this pursuit ahead of restoring GE’s historically successful strategy of creating large new, innovation-based product businesses.

However, the comparatively modest fiasco of the digital delusions was not the only major strategic error by Immelt and team after the financial crisis. The company would also fail to rethink its key energy-related strategies. Again chasing the formulae that had worked in the past, GE made blunders in this area that would prove terminal for Immelt and leave the company exposed to a final, predictable but misguided restructuring. We tell this final piece of the story of GE’s rise and decline in Part Four, Energy Misfire (2001-2019).

Unsustainable Triumph

Part Two of Value-Delivery in the

Rise-and-Decline of General Electric

Illusions of Destiny Controlled & the World’s Real Losses

By Michael J. Lanning––Copyright © 2022 All Rights Reserved

This is Part Two of our four-part series on GE’s great rise and eventual dramatic decline, seen through our value-delivery lens. Businesses as much as ever need strategies that deliver superior value to customers, including via major product-innovation. Though now a faded star, GE’s lengthy history offers striking lessons on this fundamental challenge. Our two recent posts in this series provide an Introductory Overview and then Part One on GE’s first century. (Note from the author: special thanks to colleagues Helmut Meixner and Jim Tyrone for tremendously valuable help on this series.)

When Jack Welch became CEO in 1981, GE was already a leading global firm, with revenues of $27 billion, earnings of $1.6 billion, and market value of $14 billion. Led by Welch, GE then implemented an ingenious strategy that produced legendary profitable growth. By 2000, GE had revenues of $132 billion, earnings of $12.7 billion, and a market value of $596 billion, making it the world’s most valuable company. Yet, this triumphant GE strategy would ultimately prove unsustainable and self-destructive.

Not widely understood, the Welch strategy was a distinct hybrid. It leveraged GE’s traditional, physical-product businesses––industrial and consumer––to rapidly expand its initially-small financial-service businesses, thereby accelerating total-GE’s growth. At the same time, it grew those product-businesses slowly but with higher earnings for GE, by cutting costs––including reduced focus on product-innovation. Thus, for over two decades after 1980, this hybrid-strategy produced the corporation’s legendary, profitable growth.

Welch did not fully explain this partially opaque and––given GE’s history––surprisingly financial-services-oriented strategy. Instead, his preferred narrative was simpler and less controversial. It primarily credited GE’s great success to Welch’s cultural initiatives––not to the hybrid strategy––and emphasized total-GE growth more than that of the financial-businesses. Yet, the hybrid strategy and the crucial growth of the financial-businesses were the real if partially obscured key to GE’s growth. The initiatives did improve GE’s culture but played little role in that growth. Still, fans embraced this popular, reassuring narrative about initiatives, helping make GE the most admired––not just most valuable––company. As with the ultimately unsustainable Roman triumphs of Caesar, much of the world would hail Welch’s GE success without recognizing its longer-term flaws.

The Welch strategy seemed to triumph for over two decades, but eventually proved a tragic wrong turn by GE––and a misguided strategic model for much of the business world. In its first century, GE built huge, reliably profitable product-businesses, based on customer focused, science based, synergistic product-innovation. Now, its new hybrid of physical-product-and-financial businesses deemphasized much of this highly successful approach. For over twenty years after 1980, it leveraged GE’s product-business strengths to enable breathtaking growth of its financial-businesses––from $931 million in 1980 to over $66 billion in 2000––in turn driving total-GE’s spectacular, profitable growth.

However, this apparent, widely celebrated triumph was in fact a major long-term strategic blunder. It replaced GE’s great focus on product-innovation-driven growth with a focus on artificially engineered, financial-business growth. That hybrid-strategy triumph produced impressive profitable growth for two decades but was ultimately unsustainable. Its growth engine was inherently limited––unlikely to sustain major growth much beyond 2000––and self-destructive, entailing higher-than-understood risk. Most fundamentally, however, it provided no adequate replacement for the historic, product-innovation heart of GE strategy.

Welch-successor Jeff Immelt and team recognized some of the limitations and risks of the hybrid strategy but lacked the courage and vision to fully replace it. In the 2008-09 financial crisis, the strategy’s underestimated risks nearly destroyed the company. GE survived, partially recovering by 2015, but having let product-innovation capabilities atrophy, GE was unprepared to create major new businesses. After 2015, lingering financial risks and new, strategic blunders would further sink GE, but its stunning decline traces most importantly to Welch’s clever yet long-term misguided, hybrid strategy.

GE’s 1981 crossroads––drift further or refocus on its electrical roots?

The GE that Jack Welch inherited in 1981 had always been a conglomerate––a multi-business enterprise. Yet, in GE’s first century it had avoided the strategic incoherence often associated with conglomerates––collections of unrelated businesses, assembled only on superficial financial and empire-building criteria. In contrast, though technologically diverse, most GE businesses pre-1981 shared important characteristics––nearly all manufactured products with deep roots in electricity or electromagnetism. They thus benefited from mutually reinforcing, authentically synergistic relationships.

By the 1970s, however, GE had begun slightly drifting from those electrical roots, losing some synergy and coherence. GE plastics, originally developed for electrical insulation, evolved into large businesses mostly unrelated to electrical technology. More importantly, however, GE failed to persist in some key electrical markets. Falling behind IBM, GE exited computers in 1970. Despite software’s unmistakable importance, the company failed to develop major capabilities or business in this area. GE’s Solid State electronics division pursued semiconductors, but with limited success by the late 1970s. These markets were competitively challenging but also essential to digital technology, which has since come to dominate much of the world’s economy. New GE efforts in these markets might well have failed, but few firms had GE’s depth of capabilities––had it possessed the will to succeed.

In 1980, GE businesses were dominantly electrical- and electromagnetic-related. 84% of revenues were: Power; Consumer (lighting and appliances); Medical Imaging; and Aircraft Engines––linked by turbine technology to Power. 13% were plastics and mining (a major 1976 acquisition). 3% were GE Credit Corp, financing customer purchases. Thus, GE faced a strategic crossroads as Welch took the helm––continue drifting from, or redouble, its historical focus on product innovation in its electrical roots.

To pursue the second option, Welch and team would have needed to deeply explore the evolving preferences and behaviors of customers in electrical markets, including digital ones neglected by GE. This team might have thus discovered new value propositions, deliverable via major product innovations, reinvigorating GE growth in its core markets.

Instead, Jack Welch led the company in a new strategic direction––further from its roots. GE divested mining but increasingly embraced non-electrically based businesses––expanding plastics and pursuing the glamorous yet strategically unrelated broadcasting business, acquiring NBC in 1986.

After the mid-1990s, moreover, GE widened its product-businesses’ focus––from primarily manufacturing-based, superior product-performance to include expanded industrial services. These proved more profitable and less reliant on product innovation.

Especially crucial, however, the new Welch strategy would also quietly but aggressively expand GE’s financial services. In his 2001 autobiography Jack: Straight from the Gut (Warner Books), Welch recalls his first impressions of that initially unfamiliar business:

Of all the businesses I was given as a sector executive in 1977, none seemed more promising to me than GE Credit Corp. Like plastics, it was well out of the mainstream…and I sensed it was filled with growth potential… My gut told me that compared to the industrial operations I did know, this business seemed an easy way to make money. You didn’t have to invest heavily in R&D, build factories, and bend metal day after day. You didn’t have to build scale to be competitive. The business was all about intellectual capital—finding smart and creative people and then using GE’s strong balance sheet. This thing looked like a “gold mine” to me.

More difficult to execute than this comment implies, the hybrid strategy nonetheless worked almost magically well for two decades. GE Credit Corp (GECC)––later renamed, and termed here simply GEC––aggressively leveraged key strengths from GE’s traditional product-businesses. Thus, GEC would produce most total-GE growth in the Welch era, driving the company’s astounding market-value. Yet with this hybrid strategy GE would eventually lose some of its strategic coherence, its businesses becoming less related, while growth proved unsustainable much beyond 2000 and risk ran higher than understood.

Meanwhile, GE’s ability to profitably deliver superior value––via customer focused, science based, synergistic product-innovation––had deteriorated. GE had taken a major wrong turn at its 1981 crossroads and would eventually find itself at a strategic dead-end.

Real cause of GE triumph––Welch-initiatives or the hybrid-strategy?

After 1981, Welch launched a series of company-wide cultural initiatives, broadly popular and emulated in the business community. These included the especially widely adopted Six Sigma product-quality methodology. Another initiative mandated that each GE business achieve a No. 1 or No. 2 market-share or be closed. Globalization pushed to make GE a fully global firm. Work-Out engaged front-line employees in adopting Welch’s vision of a less bureaucratic culture. Boundaryless encouraged managers to freely share information and perspective across businesses. Services aimed to make service, like new equipment, a central element in GE product-businesses, investing in service technologies and rapidly expanding service revenues. E-business, by the late 1990s, encouraged enthusiastic engagement with e-commerce.

These initiatives likely helped make GE’s culture less bureaucratic, more decisive, and efficient. However, as GE business results soared, the initiatives were also increasingly credited––but unjustifiably––as a primary cause of those results. A leading popularizer of this misguided narrative was Michigan business-professor Noel Tichy. His books hagiographically extolled the Welch approach. In the mid-1980s Tichy led and helped shape GE’s famous Crotonville management-education center, where GE managers were taught that the initiatives were key to GE’s success. Welch agreed, writing in Jack:

In the 1990s, we pursued four major initiatives: Globalization, Services, Six Sigma, and E-Business… They’ve been a huge part of the accelerated growth we’ve seen in the past decade.

Many shared this view. In 2008, for example, strategic-management scholar Robert Grant writes:

Under Welch, GE went from strength to strength, achieving spectacular growth, profitability, and shareholder return. This performance can be attributed largely to the management initiatives inaugurated by Welch.

In 2005, a University of Manchester team led by Julie Froud reviewed the over fifty books and countless articles by management experts on GE in the Welch era. Froud’s team debunked the mythology of credit given to the initiatives, finding in that literature a repeated pattern of blatantly confusing correlation with causation. Consistently––but incompetently––writers first cited GE’s inarguably impressive business results, then juxtaposed them with the initiatives––implying causality. For example, the team cites 2003 work by Paul Strebel of the Swiss IMD Business School:

Strebel’s first shot announces GE’s undisputed achievement which is “two decades of high powered growth” … In the second shot, Strebel identifies key initiatives… as “trajectory drivers” that allowed the company to engineer upward shifts in “product/market innovation.”

However, the initiatives did not cause GE’s profitable growth of this era. If they had, we should find similar, dramatic growth across GE’s major business-sectors––since the initiatives were implemented throughout GE. Its product-business sector consisted of Power, Consumer, Medical Imaging, Aircraft Engines, Plastics, and Broadcasting. Its financial-business sector was GEC. As shown below, these two sectors in 1980-2000 grew at radically different rates. Product-businesses grew by +175%––merely in line with other US manufacturers––while GEC grew at the astounding rate of more than +7,000%.

GE Sales (Nominal $) by sector––1980-2000

                      Total-GE (Product-Bus’s + GEC)   GE Product-Businesses   GEC (Financial Businesses)
1980––$ Billions                   25.0                         24.0                      0.9
2000––$ Billions                  132.2                         66.0                     66.2
Growth––$B (%)                    107.2 (530)                   42.0 (275)               65.2 (7,108)

Source (this and next three tables): GE Annual Reports, Froud et al, & author’s calculations

GE’s product-businesses also grew in this period, but only cyclically and dramatically slower than GEC. They grew at least 10% in only four of these twenty years, with average annual growth of less than 6%. In contrast, GEC grew more than 10% in every year but one (1994), with average annual growth of over 26%.

Average Annual Growth (%) by Sector––1981-2000

              Total-GE (Product-Bus’s + GEC)   GE Product-Businesses   GEC (Financial Businesses)
1981-2000                   8.9                        5.7                     26.7
1981-1990                   9.3                        6.7                     36.6
1991-2000                   8.6                        4.7                     16.7
1995-2000                  13.7                        8.6                     22.3

GE Real Sales (2001 Prices, net of inflation) by sector––1980-2000

                                     Total-GE (Product-Bus’s + GEC)   GE Product-Businesses   GEC (Financial Businesses)
1980––$ Billions                                  57.4                        54.4                      3.0
2000––$ Billions                                 141.5                        72.5                     69.0
Growth––$B (%)                                    84.1 (246)                  18.1 (133)               66.0 (2,280)
CAGR (Compound Annual Growth Rate)                 4.6%                        1.4%                     16.9%

Real GEC-Growth as % of Real Total-GE Growth (66.0/84.1): 78%

Thus, most GE real-growth in this era traced to GEC. In contrast to GEC’s more-than-2,100% real-growth, the product-businesses’ 33% paled. Those businesses’ Compound Annual Growth Rate (CAGR) in real revenues was only 1.4%, while GEC’s was 16.9% and its $66 billion real-growth was 78% of total-GE’s $84 billion real-growth. Clearly, GE’s 1981-2000 by-sector results refute the standard, initiatives-focused GE narrative.
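As a rough arithmetic check of those figures, the compound annual growth rate can be computed from just the endpoints of the real-sales table above (an illustrative calculation using the table’s rounded values):

\[ \text{CAGR} = \left(\frac{\text{Real Sales}_{2000}}{\text{Real Sales}_{1980}}\right)^{1/20} - 1 \]

For GEC this gives (69.0/3.0)^(1/20) − 1 ≈ 17%, versus (72.5/54.4)^(1/20) − 1 ≈ 1.4% for the product-businesses and (141.5/57.4)^(1/20) − 1 ≈ 4.6% for total-GE, in line with the table; the small difference for GEC reflects rounding of the endpoints.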

GE’s great, profitable growth during this period primarily reflected––not the Welch initiatives––but his real, hybrid strategy. To comprehend GE’s phenomenal rise, and eventual decline, today’s managers need to understand that hybrid strategy.

A Hybrid Product/Financial-Business––the Real Welch-Strategy

In 1987, GECC was renamed GE Capital Services––“Capital” to many and here simply termed GEC. By then it had evolved into a large, fast-growing multi-unit financial-business, with $3.9 billion revenues––about 10% of total GE. As Froud et al discuss, GEC had developed a wide range of financial services, including for example:

1967––the start of airline leasing with USAir; subsequently leading to working capital loans for distressed airlines [1980s] … By 2001…managed $18 billion in assets

1983––issues private-label credit card for Apple Computer; first time a card was issued for a specific manufacturer’s product

1980s––employers’ insurance, explicitly to help offset cyclicality in the industrial businesses

1980s––became a leader in development of the leveraged buyout (LBO)

1992––moved into mortgage insurance

1980s-90s––one of largest auto finance companies…and [briefly] sub-prime lending in autos

Though quietly leading total-GE growth throughout the Welch era, GEC was not centrally featured in GE’s primary narrative, which focused mostly on total-GE and the Welch initiatives. Nonetheless, the world increasingly noticed that GE’s initially-small financial-business––not just total-GE––was growing explosively, requiring its own explanation.

Thus, to explain its spectacular growth, a secondary narrative emerged, emphasizing GEC’s entrepreneurial, growth-obsessed perspective, derived from GE. There is some truth in this explanation––although GEC was a financial-business, it proactively applied GE’s depth of product-business experience and skill. By 1984, Thomas Lueck in the NY Times had noted that “…Mr. Wright said that his executives are able to rely heavily on technical experts at General Electric to help them assess the risk of different businesses and product lines.” Later, in 1998, John Curran in Fortune primarily explains GEC’s success in terms of such experience and skills:

The model is complex, but what makes it succeed is not: a cultlike obsession with growth, groundbreaking ways to control risk, and market intelligence the CIA would kill for… [GEC CEO Wendt] readily acknowledges the benefits Welch has brought to every corner of GE: a low-cost culture and the free flow of information among GE’s divisions, which gives Capital access to the best practices of some of the world’s best industrial businesses…

All five of Capital’s top people are longtime GE employees [including three] from GE’s industrial side… This unusual combination of deal-making skill and operations expertise is one of the keys to Capital’s success. Capital not only buys, sells, and lends to companies but also, unlike love ’em and leave ’em Wall Street, excels at running them. … Says Welch: “It is what differentiates [GEC] from a pure financial house.” [GEC’s] ability to actually manage a business often saves it from writing off a bad loan or swallowing a leasing loss.

Continuing, Curran focuses on GEC’s growth-obsession, especially via acquisitions:

The growth anxiety is pervasive… Says Wendt: “I tell people it’s their responsibility to be looking for the next opportunity…” Capital’s growth comes in many forms, but nothing equals the bottom-line boost of a big acquisition. Says [EVP] Michael Neal: “I spend probably half my time looking at deals, as do people like me, as do the business leaders.” Over the past three years Capital has spent $11.8 billion on dozens of acquisitions.

Moreover, we should not underestimate the skill that GE displayed in using acquisitions to build its huge, complex collection of financial businesses. Major misjudgments “would have undermined GE’s financial record,” Froud et al write, explaining that:

By way of contrast, Westinghouse, GE’s conglomerate rival, had its finance arm liquidated by the parent company after losing almost $1 billion in bad property loans in 1990.

Therefore, these entrepreneurial, growth-focused product-business skills clearly gave GEC an advantage over competing financial companies, thus helping it succeed. However, its leaders, including Bossidy, Wright, Wendt, and Welch, had the insight to see that GEC could also enjoy two other, decisive advantages over financial-business competitors. These two advantages, much less emphasized in popular explanations of GEC’s success, were its lower cost-of-borrowing and its greater regulatory freedom to pursue high returns.

Profitability of a financial-business is determined by its "net interest spread." This spread is the difference between the business' cost-of-borrowing––or cost-of-funds––and the returns it earns by providing financial services––i.e., by lending or investing those funds. GEC's and thus GE's growth, until a few years after 2000, was enabled by the combination of its entrepreneurial, growth-focused skills and––most importantly––the two key advantages it enjoyed on both sides of this crucial net-interest-spread.
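As a simple worked illustration of that spread (the 10% lending return is hypothetical; the funding costs are those Colvin cites below):

\[
\text{net interest spread} \;=\; \text{return earned on lending or investing} \;-\; \text{cost of funds}
\]

A lender earning a hypothetical 10% on its assets would clear a spread of 10% − 7.3% = 2.7 points at GE's cost of capital, versus 10% − 8.4% = 1.6 points at Citigroup's––the same lending business, but far more profitable on GE's side of the spread.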

GEC’s cost-of-borrowing advantage––via sharing GE’s credit-rating

Of course, a business’ credit rating is fundamental to its cost-of-borrowing. During and largely since the Welch era, the norms of credit-rating held that a business belonging to a larger corporation would share the latter’s credit rating––so long as that smaller business’ revenues were less than half those of the larger corporation. Thus, GEC during the Welch era shared GE’s exceptionally high––triple-A or AAA––credit rating. It thus held a major cost-of-borrowing advantage over most other financial-business competitors.

GE’s own triple-A rating reflected the extraordinary financial strength and reliability of its huge, traditional product businesses. As Froud et al write in 2005:

GE Industrial may be a low growth business but it has high margins, is consistently profitable over the cycle… This solid industrial base is the basis for GE’s AAA credit rating, which allows [GEC] to borrow cheaply the large sums of money which it lends on to…customers.

In 2008, Geoff Colvin writes in Fortune that while GEC helps GE by financing customers:

In the other direction, GE helps GE Capital by furnishing the reliable earnings and tangible assets that enable the whole company to maintain that triple-A credit rating which is overwhelmingly important to GE’s success. Company managers call it “sacred” and the “gold standard.” Immelt says it’s “incredibly important.”

That rating lets GE Capital borrow funds in world markets at lower cost than any pure financial company. For example, Morgan Stanley’s cost of capital is about 10.6%. Citigroup is about 8.4%. Even Buffett’s Berkshire Hathaway has a capital cost of about 8%. But GE’s cost is only 7.3%, and in businesses where hundredths of a percentage point make a big difference, that’s an enormously valuable advantage. [emphasis added]   

While GEC maintained it, the triple-A rating––and thus, generally, lower cost-of-borrowing––was a highly valuable advantage. It allowed GEC to grow rapidly by profitably delivering superior value propositions to customers, in the form of various financial services at competitively lower costs––interest and fees. It also provided low-cost funding to enable GEC’s aggressive, serial acquisition of new companies, thus further helping GEC rapidly expand. However, sustainability of this credit-rating relied on GEC not reaching 50% of total GE’s size. Yet, GEC would approach that limit by about 2000 if it continued its torrid growth. As The Economist writes in late 2002:

As GE Capital has grown (from under 30% of the conglomerate’s profits in 1991 to 40% in 2001), its prized triple-A credit rating has come under pressure… rating agencies say that they like the way GE manages its financial businesses. But they make it clear that GE can no longer allow GE Capital to grow faster than the overall company without sacrificing the triple-A badge. (Only nine firms still have that deep-blue-chip rating.) No matter how well run, financial firms are riskier than industrial ones, so the mix at GE must be kept right, say the agencies. As usual, the agencies seem to be behind the game: the credit markets already charge GE more than the average for a triple-A borrower.

The strength of this rating declined in the rest of that decade. By 2008, many investors doubted GE’s triple-A rating. As Colvin continues:

The credibility of bond ratings in general tumbled when it was revealed that securitized subprime mortgages had been rated double- or triple-A. GE’s rating clearly meant nothing to investors who bid credit default swaps on company bonds…. The message of the markets: The rating agencies can say what they like; we’ll decide for ourselves.

GE finally lost its triple-A rating in 2009, as the financial crisis unfolded. Nonetheless, GEC's credit-rating and thus borrowing-cost advantage had been a key factor in its success until 2008, and especially in the 1981-2000 Welch era.

GEC’s returns advantage––via sharing GE’s regulatory-classification

Though less-widely known than its cost-of-borrowing advantage, GEC also held a second major advantage––greater regulatory freedom to earn high returns on its services. In this era, businesses were classified––for purposes of financial regulation––as either financial or industrial (i.e., non-financial). Though clearly selling financial services, GEC––by virtue of being a part of GE––was allowed to share the corporation’s “industrial” regulatory-classification. Echoing the credit-rating rules, GEC could sustain this classification so long as its revenues stayed below 50% of total-GE’s.

With that classification, GEC was largely free from the financial regulations facing its competitors. As early as 1981, Leslie Wayne writes in the NY Times, GEC’s “…success has drawn the wrath of commercial bankers who compete against it but face Government regulations that [GEC] does not.” The bankers had a point, still valid twenty years later. Some of these costly and restrictive regulatory requirements––faced by GEC’s competitors but avoided by GEC––included: levels of financial reserves; ratios of assets-to-liabilities; and the Federal Reserve’s oversight and monitoring of asset quality-and-valuation. Moreover, GEC’s industrial-classification gave it more freedom to aggressively expand via acquisitions and divestitures, as Leila Davis and team at U Mass, in 2014, write:

Laxer regulation compared to [that for] traditional financial institutions has allowed GE to move into (and out of) a wider spectrum of financial services, with considerably less regulatory attention, than comparable financial institutions.

However, GEC’s regulatory freedom via its industrial-classification let it save costs but only by exposing the company to higher financial risks––against which the regulations had been intended to protect “financial” businesses. GEC’s industrial classification left it freer than competitors to earn high returns, unless and until related higher risks came home to roost––as they did in 2008. An example of saving costs but increasing risk was the restructuring of GE’s balance-sheet in this era, in favor of debt. As Froud et al explain:

Because GE does not have a retail banking operation it needs large amounts of debt finance to support its activities of consumer and commercial financing. Thus, the decision to grow [GEC] has resulted in a transformation in GE’s balance sheet. Most of the extra capital comes in the form of debt not equity: at the consolidated level, equity has fallen from around 45 per cent of long term capital employed in 1980 to around 12 per cent by the late 1990s… Almost all of the liabilities are associated with GECS. This restructuring…has been achieved through very large issues of debt: for example, in 1992, Institutional Investor estimated that GE issued $5 to $7bn of commercial paper [a form of inexpensive short-term financing] every day…

This growth in debt would typically impose costs and constraints on a company classified as "financial," but GEC's industrial classification let it avoid most of this burden, at least until the early 2000s. This higher debt, however, did increase the level of interest-rate and other risk faced by GEC. Yet, for over twenty years this financial restructuring went largely unchallenged. After all, GE-and-GEC results were consistently outstanding in this era, and as Froud's team write, analysts may not have fully understood the GEC numbers:

GE is generally followed by industrial analysts because it is classed as an industrial, not a financial firm… Arguably most industrial analysts will have limited ability to understand [GEC], whose financial products and markets are both bewilderingly various and often disconnected from those in the industrial businesses.

This silence was finally broken in 2002 by an investor outside the analyst community when, as Alex Berenson writes in the NY Times:

William Gross, a widely respected bond fund [PIMCO] manager, sharply criticized General Electric yesterday, saying that G.E. is using acquisitions to drive its growth rate and is relying too much on short-term financing. “It grows earnings not so much by the brilliance of management or the diversity of their operations, as Welch and Immelt claim, but through the acquisition of companies––more than 100 companies in each of the last five years––using… GE stock or cheap…commercial paper” … Though the strategy appears promising in the short run, it increases the risks for G.E. investors in the long run, he said. If interest rates rise or G.E. loses access to the commercial paper market, the company could wind up paying much more in interest, sharply cutting its profits.

Gross suggests in CNN Money that GE ignored financial regulatory-requirements:

“Normally companies that borrow in the [commercial paper] market are required to have bank lines at least equal to their commercial paper, but GE Capital has been allowed to accumulate $50 billion of unbacked [commercial paper] …” Gross said.

Of course, GEC could legally disregard this financial regulatory-requirement, thanks to its industrial classification––it was regulated as an industrial, not a financial business. Froud et al add that, “Overall Gross stated that he was concerned that GE, which should be understood as a finance company, was exposed to risks that were poorly disclosed.” Markets briefly reacted to this criticism, but GE weathered the storm easily enough.

As of 2000, the hybrid strategy had successfully produced the company’s legendary, profitable growth. However, two key aspects of GE’s situation had also changed for the worse. During the Welch era GE had significantly reduced its emphasis on product innovation. At the same time, the twenty-year lifetime of growth produced by the hybrid strategy was coming to its inevitable end. Storm clouds would soon enough hover over GE.

Deemphasis on product-innovation during the Welch era

Under Welch and the hybrid strategy, GE missed a huge opportunity to fully extend its great tradition of product-innovation into some of the most important and––for GE, intuitively obvious––electrical markets. These included computers, software, semiconductors, and others––dismissed by Welch and team as bad businesses for GE, in contrast to the "more promising" financial businesses that "seemed an easy way to make money." Yet, those neglected electrical businesses, despite the need to "invest heavily in R&D, build factories, and bend metal day after day," evolved into today's world-dominating digital technologies, where GE might later have become a leader, not a spectator.

GE did not wholly abandon product-innovation during the Welch era, as the company continued making significant incremental innovations in its core industrial businesses of power, aviation, and medical imaging. Nonetheless, complementing GE’s failure to create new businesses in the emerging digital-technology markets, the company seemed to deemphasize product innovation generally––apparently carried away by the giddy excitement of rapidly accelerating financial services. Some observers have agreed; for example, Rachel Silverman in the WSJ writes in 2002:

In recent decades, much of GE’s growth has been driven by units such as its NBC-TV network and GE Capital, its financial-services arm; the multi-industrial titan had shifted focus toward short-term technology research…Some GE scientists had been focusing on shorter-term, customer-based projects, such as developing a washing machine that spins more water out of clothes. That irked some of the center’s science and engineering Ph.D.’s, who thought they were spending too much time fixing turbines or tweaking dishwashers. Some felt frustrated by the lack of time to pursue broader, less immediate kinds of scientific research…”Science was a dirty word for a while,” says Anil Duggal, a project leader…

Steve Lohr writes in 2010 in the NY Times:

Mr. Immelt candidly admits that G.E. was seduced by GE Capital’s financial promise––the lure of rapid-fire money-making unencumbered by the long-range planning, costs and headaches that go into producing heavy-duty material goods. Other industrial corporations were enthralled with finance, of course, but none as much as G.E., which became the nation’s largest nonbank financial company.

James Surowiecki writes in The New Yorker in 2015:

In the course of Welch’s tenure, G.E.’s in-house R. & D. spending fell as a percentage of sales by nearly half. Vijay Govindarajan, a management professor at Dartmouth…told me that “financial engineering became the big thing, and industrial engineering became secondary.” This was symptomatic of what was happening across corporate America: as Mark Muro, a fellow at the Brookings Institution, put it to me, “The distended shape of G. E. really reflected twenty-five years of financialization and a corporate model that hobbled companies’ ability to make investments in capital equipment and R. & D.”

Later still, USC’s Gerard Tellis in Morning Consult in 2019 argues in retrospect that “GE rose by exploiting radical innovations,” adding that:

The real difference between GE on the one hand and Apple and Amazon on the other is not industry but radical innovation. GE focused on incremental innovations in its current portfolio of technologies. Apple and Amazon embraced radical innovations, each of which opened new markets or transformed current ones. [Managers should:] Focus on the future rather than the past; target transparent innovation-driven growth rather than manipulate cash; and strive for radical innovations rather than staying immersed in incremental innovations.

So, GE's product-innovation capabilities decayed somewhat under Welch. Still, the hybrid strategy was undeniably effective, so we can speculate about whether GE might have found a better way to use it. Perhaps GE could have still leveraged the product-businesses' strengths to grow GEC, without deemphasizing those businesses' delivery of superior value via customer-focused, science-based, synergistic product-innovation.

Yet, a revised hybrid-strategy would have likely still struggled. It would have presented the same financial risks that GE badly underestimated. Its focus on financial services––so different from the product-businesses––would have still created strategic incoherence for GE. Probably most crucial, such a revised strategy would have required a robust, renewed embrace of the above fundamental product-innovation principles that GE had developed and applied in its first century––so different from the GE that evolved under Welch.

This decay in product-innovation capabilities did not prevent the company from achieving its stellar financial results through 2000. However, GE could find itself in a dead-end should the hybrid strategy falter after 2000––which soon enough it did.

Dead-end finale for GE’s unsustainable triumph––2001-2009

In 2000, at the peak of the Welch-led triumph, GE’s rapid, profitable growth seemed likely to continue indefinitely. However, perpetual-motion machines never work, and GEC was no exception. In the first decade of the new millennium, the ingenious hybrid strategy began leading GE––inevitably––into a dead end.

Almost immediately after Welch retired, Immelt and GE felt pressure, as market confidence was shaken by a series of events––the burst of the dot-com bubble, the Enron collapse, various other accounting scandals, and not least the 9/11 attack. The company's stock price fell, and by 2002 its market-value was only 43% of its 2000 peak. GE soon regained its footing and began growing again after 2003, recovering to about 65% of that 2000 market-value peak by 2006. However, the company had lost the magic of the hybrid strategy and failed to adjust.

In the two decades of the Welch era, this strategy had quietly turbo-charged the growth of GEC––which in turn drove total-GE’s spectacular, widely-celebrated profitable growth. To perform this remarkable trick, the hybrid strategy had enabled two crucial advantages for GEC––a lower cost-of-borrowing (via sharing GE’s triple-A credit rating) and more freedom-to-earn-high returns (via sharing GE’s regulatory classification). However, these key advantages depended on keeping GEC’s revenues below 50% of total-GE, a limit nearly reached––49%––by 2000. GE leadership understood that they now had two options––cut GEC’s growth or dramatically accelerate the product-businesses.

Since GEC had long been the goose that laid golden eggs, GE was naturally loath to throttle it back. Thus, accelerating the product-businesses was preferable. Indeed, during GE’s first century––preceding Welch––the company had repeatedly expanded and created major new product-businesses. It had done so largely by delivering superior value to customers, led by science-based, synergistic product-innovation. However, GE under Welch had lost much of that product-innovation habit in its product-businesses. These were now managed, instead, for earnings, mostly via slow growth and relentless cost-savings––what Jack called “productivity.” To suddenly generate major new growth of the product-businesses––enough to make up for GEC’s previous, exceptionally-high growth––might have been possible, but a tall order. Moreover, to do so via organic growth––rather than by acquisition––would have been even more challenging.

Not surprisingly, therefore, Welch and successor Jeff Immelt––to keep the hybrid-strategy growth-engine running––turned to giant industrial acquisitions. Indeed, Welch had hoped to cap his career in 2000 with an enormous $45 billion acquisition of Honeywell. That would have added $25 billion in revenues to GE's product-businesses, reducing the financial-businesses' 49% share of GE––down to 41%. That would have bought three-to-four more years during which GEC could have continued growing at its late 1990s rate before again reaching the 50%-of-total-GE limit.
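The arithmetic behind that 49-to-41-percent shift is straightforward (the roughly $130 billion total used here is an approximation of GE's 2000 revenue, included only to check the percentages in the text):

\[
\text{new GEC share} \;=\; \frac{0.49\,T}{T + 25} \;\approx\; \frac{0.49 \times 130}{130 + 25} \;\approx\; 41\%
\]

where T is total-GE revenue, in $ billions, just before the deal.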

If this tactic had worked, Welch and Immelt likely imagined that GE could repeat it every few years with another block-buster acquisition. GE's efforts to grow might have thus evolved into no more than a serial reliance on acquisition––like many essentially-failed, imagination-challenged conglomerates since the 1960s. However, before even that dubious plan could be realized, the European regulators blocked the Honeywell deal, based on anticompetitive concerns, which some––e.g., NY Times' Andrew Ross Sorkin––suggested had been clear from the outset, but ignored by Welch. Undismayed, as Drake Bennett in Bloomberg wrote in 2018, Immelt after 2000 pursued other large, industrial prey:

He made a series of acquisitions [e.g., 2003––$5.5 billion Vivendi-Universal, $9.5 billion Amersham]. These proved more expensive and less synergistic than promised. Scott Davis, a longtime GE analyst…calculated that GE’s total return on Immelt’s acquisitions has been half what the company would have earned by simply investing in stock index mutual funds.

Shown below are GE revenue-results in real-dollars by major segment, from 2000 until just before the impact of the 2008-09 financial crisis. In this period, Immelt and team had to constrain GEC's growth to stay within the mandatory 50%-of-total-GE limit. Indeed, in real-dollars GEC declined, with a -1.9% CAGR. Aside from acquisitions, GE still needed growth, no longer just earnings, from the product-businesses. Immelt thus increased R&D spending and likely squeezed those businesses less for cost savings. Their growth responded, increasing from a 1.4% CAGR in the 1981-2000 period to 2.3% in the 2000-2008 period. However, this increase could not replace GEC. In contrast to total-GE's 1980-2000 real-sales CAGR of 4.6%, its 2000-08 real-sales CAGR was only 0.36%.

GE Real Sales (2012 Prices) by sector––2000-2008

                                  Total-GE (Product-Bus's + GEC)   GE Product-Businesses   GEC (Financial Businesses)
2000––$ billions                              167.9                       83.8                       84.1
2008––$ billions                              172.9                      100.8                       72.1
Growth, $B (2008 as % of 2000)                  5.0 (103)                  17.0 (120)                -12.0 (86)
CAGR                                            0.36%                       2.3%                      -1.9%
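For readers who want to reproduce the CAGR row, here is a minimal sketch using the real-sales figures from the table (the function and variable names are ours, for illustration only):

```python
# Reproduce the CAGR row of the table above from its 2000 and 2008 real-sales figures.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a number of years."""
    return (end / start) ** (1 / years) - 1

real_sales_2012_prices = {                     # $ billions, from the table above
    "Total-GE": (167.9, 172.9),
    "GE Product-Businesses": (83.8, 100.8),
    "GEC (Financial Businesses)": (84.1, 72.1),
}

for segment, (sales_2000, sales_2008) in real_sales_2012_prices.items():
    print(f"{segment:<28} CAGR 2000-08: {cagr(sales_2000, sales_2008, 8):+.2%}")

# Prints roughly +0.37%, +2.33% and -1.91%, consistent with the table's
# rounded 0.36%, 2.3% and -1.9%.
```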

After about four years of feeling pressure, lacking a reliable basis for growth, Immelt and team unsurprisingly began to take additional risks. In 2014, The Economist commented that, “in his early years as boss, Mr. Immelt let the financial side continue to swell.” GE had benefited earlier from a bubble in gas-turbine demand, and now:

As another bubble inflated, in housing, GE Capital expanded its mortgage lending, bought up corporate debt and muscled into commercial property, all of which left it horribly exposed when Lehman Brothers’ collapse in September 2008 led to a markets meltdown.

Continuing in 2014, Davis et al at U Mass write:

In 2004, GE moved into the subprime mortgage industry with the acquisition of Western Mortgage Company (WMC). When GE divested [WMC] in 2007 after the subprime bubble burst ([GE] losses estimated at more than one billion dollars), [GEC] was the tenth-largest subprime mortgage lender in the U.S.—ranking above well-known examples of financial companies including Lehman Brothers, Citigroup, and Wachovia…

In 2008, the commercial-paper market froze––confirming Gross' concerns and creating a temporary crisis for GE, which suddenly could not roll over its short-term debt. GE and the world were seeing the appearance of a Black Swan––the term Nassim Taleb coined for highly unpredictable events of massive impact. In this case, the black swan was the global financial crisis. In March 2009, as Gryta and Mann later write in the Wall Street Journal:

General Electric was on the brink of collapse. The market for short-term loans, the lifeblood of GE Capital, had frozen, and there was little in the way of deposits to fall back on. The Federal Reserve stepped in to save it after an emergency plea from Immelt.

 

*    *    *

The Welch hybrid-strategy had taken GE to the apex of its triumph in 2000. Continued by Immelt, this strategy had then taken GE to a dead-end. It could no longer drive growth by priming GEC; at the same time, after letting GE’s product-innovation capabilities atrophy for over two decades, the product-businesses could not pick up the slack. GE would partially recover in the next few years, only to stumble again after 2015. The great company was not dead but had lost its way.

In the next part of this blog, Part Three, we explore GE’s Digital Delusions. This would be Immelt’s attempt to replace the financially-driven Welch strategy with a new grand but unrealistic vision of GE as a tech giant for the industrial world. That Part Three post will be followed by Energy Misfire––Part Four. Not yet published, that final post in this GE Series will focus on GE’s failure after 2001 to create a new, viable global strategy for energy, missing its greatest opportunity of the past fifty years in the process.

Customer-Focused, Science-Based and Synergistic Product-Innovation

Part One of Value-Delivery in the

Rise-and-Decline of General Electric

Illusions of Destiny-Controlled & the World’s Real Losses

By Michael J. Lanning––Copyright © 2021 All Rights Reserved

 

This is Part One of our four-part series analyzing the role of strategy in GE's extraordinary rise and dramatic decline. Our earlier post provides an Introductory Overview to this series. (A link to Part Two is provided at the end of this Part One.)

We contend that businesses now more than ever need strategies focused on profitably delivering superior value to customers. Such strategy requires product-or-service innovation that: 1) is customer-focused; 2) is often science-based; and 3) in a multi-business firm, shares powerful synergies.

As this Part One discusses, GE in its first century––1878-1980––applied these three strategic principles, producing the great product-innovations that enabled the company’s vast, profitable businesses.

After 1981, however, as discussed later in Parts Two-Four, GE would largely abandon those three principles, eventually leading the company to falter badly. Thus, Part One focuses on GE's first century and its use of those three strategic principles, because they are the key both to the company's rise and––by neglect––its decline.

Note from the author: special thanks to colleagues Helmut Meixner and Jim Tyrone for tremendously valuable help on this series.

Customer-focused product-innovation––in GE’s value delivery systems

In GE's first century, its approach to product-innovation was fundamentally customer-focused. Each business was comprehensively integrated around profitably delivering winning value propositions––superior sets of customer-experiences, including costs. We term a business managed in this way a "value delivery system," an apt description for most GE businesses and associated innovations in that era. Two key examples were the systems developed and expanded by GE, first for electric lighting-and-power (1878-1913), and then––more slowly but also highly innovative––for medical imaging (1913 and 1970-80).

GE’s customer-focused product-innovation in electric lighting-and-power (1878-1913)

From 1878 to 1882, Thomas Edison and team developed the electric lighting-and-power system, GE’s first and probably most important innovation. Their goal was large-scale commercial success––not just an invention––by replacing the established indoor gas-lighting system.[1] That gas system delivered an acceptable value proposition––a combination of indoor-lighting experiences and cost that users found satisfactory. To succeed, Edison’s electric system needed to profitably deliver a new value proposition that those users would find clearly superior to the gas system.

Therefore, in 1878-80, Edison’s team––with especially important contributions by the mathematician/physicist Francis Upton––closely studied that gas system, roughly modeling their new electric lighting-and-power system on it. Replacing gas-plants would be “central station” electricity generators; just as pipes distributed gas under the streets, electrical mains would carry current; replacing gas meters would be electricity meters; and replacing indoor gas-lamps would be electric, “incandescent” lamps. However, rather than developing a collection of independent electric products, Edison and Upton envisioned a single system, its components working together to deliver superior indoor-lighting experiences, including its usage and total cost. It was GE’s first value-delivery-system.

Since 1802, when Humphry Davy had first demonstrated the phenomenon of incandescence––a wire heated by electricity could produce light––researchers had tried to develop a working incandescent lamp to enable indoor electric lighting. A typical lamp was enclosed in a glass bulb; two conductors at its base––connected to a power source––were attached to a filament ('B' in Edison's 1880 patent drawing). As current flowed, the filament provided resistance; the electricity's energy overcame the resistance, creating heat and incandescence. To prevent the filament's combustion, the bulb held a (partial) vacuum.

By 1879, Edison and other researchers expected such electric-lighting, using incandescent lamps, to replace indoor gas-lighting by improving three user-experiences: 1.) lower fire and explosion risks, with minimal shock-risks; 2.) no more “fouling” of the users’ air (gas lights created heat and consumed oxygen); and 3.) higher quality, steadier light (not flickering), with more natural colors.

Yet, a fourth user-experience, lamp-life––determined by filament durability––was a problem. Researchers had developed over twenty lamp designs, but the best still lasted only fourteen hours. Most researchers focused primarily on this one problem. Edison's team also worked on it, eventually improving lamp durability to over 1200 hours. However, this team uniquely realized that increased durability, while necessary, was insufficient. Another crucial experience was the user's total cost. Understanding this experience would require analyzing the interactions of the lamps with the electric power.

Of course, market-place success for the electric-lighting system would require that its total costs for users be competitive––close to, if not below, gas-lighting––and allow the company an adequate profit margin. However, the team's cost model, using then-common assumptions about incandescent-lamp design, projected totally unaffordable costs for the electric system.

All other developers of incandescent lamps before 1879 used a filament with low resistance. They did so because their key goal was lasting, stable incandescence––and low resistance increased energy-flow, thus durable incandescence. However, low resistance led to a loss of energy in the conductors, for which developers compensated by increasing the cross-section area of the conductors––and that meant using more copper, which was very expensive in large quantities. Citing a 1926 essay attributed to Edison, historian Thomas Hughes quotes Edison on why low-resistance filaments were a major problem:

“In a lighting system the current required to light them in great numbers would necessitate such large copper conductors for mains, etc., that the investment would be prohibitive and absolutely uncommercial. In other words, an apparently remote consideration (the amount of copper used for conductors), was really the commercial crux of the problem.”

Thus, the cost of the electric power would be the key to the user's total-cost of electric lighting, and that power-cost would be driven most crucially by the lamp's design. Applying the science of electrical engineering (as discussed in the section on "science-based product-innovation," later in this Part One), Edison––with key help from Upton––discovered the most important insight of their project. As Hughes writes in 1979, Edison "…realized that a high-resistance filament would allow him to achieve the energy consumption desired in the lamp and at the same time keep low the level of energy loss in the conductors and economically small the amount of copper in the conductors." Lamp design was crucial to total cost, not due to the lamp's manufacturing cost but its impact on the cost of providing electricity to it. What mattered was the whole value-delivery-system.
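A minimal numeric sketch of that insight (the wattage, voltages, and line resistance below are hypothetical, chosen only to show the square-law effect; they are not Edison's figures):

```python
# Why high-resistance filaments mattered: for a lamp drawing a fixed power P,
# the supply current is I = P / V, and the energy wasted in the copper
# conductors is I^2 * R_line (Joule heating). Raising the lamp's resistance
# raises its operating voltage, cuts the current, and so cuts the line loss.

def line_loss_watts(lamp_power_w: float, lamp_volts: float, r_line_ohms: float) -> float:
    """Copper-conductor loss for one lamp fed through a line of r_line_ohms."""
    current = lamp_power_w / lamp_volts          # I = P / V
    return current ** 2 * r_line_ohms            # Joule heating in the line

# Hypothetical comparison: a low-resistance (low-voltage) lamp versus a
# high-resistance (high-voltage) lamp, each delivering the same 100 W.
for volts in (55, 110):
    loss = line_loss_watts(lamp_power_w=100, lamp_volts=volts, r_line_ohms=1.0)
    print(f"{volts:>3} V lamp -> {loss:.2f} W lost in the conductors")

# Doubling the lamp voltage quarters the conductor loss -- or, equivalently,
# allows far thinner (cheaper) copper for the same acceptable loss.
```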

Rethinking generation, Edison found that dynamos––the existing giant generators––were unacceptably inefficient (30-40%). So, he designed one himself with an efficiency rate exceeding ninety percent. Distribution cost was also reduced by Edison’s clever “feeder-and-main” reconfiguration of distribution lines.

Success, however, would also require individual lamp-control. Streetlamps, the only pre-Edison electric lighting, were connected in series––a single circuit linked the lamps and turned them all on and off at once; and if one lamp burned out, they all went out. Edison knew that indoor lighting would only be practical if each lamp could be turned on or off independently of the others––as gas lamps had always allowed. So, Edison developed a parallel circuit––current flowed around any lamp that was switched off or burned out, allowing each lamp to be controlled individually.
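A toy sketch of that series-versus-parallel contrast (illustrative only; the function names are ours, not from the source):

```python
# In a series circuit, every lamp sits on one loop: a single failed lamp opens
# the circuit and darkens them all. In a parallel circuit, each lamp has its
# own branch, so a failure (or switching a lamp off) stays local.

def series_lit(lamps_ok):
    """All lamps light only if every lamp in the single loop is intact."""
    circuit_closed = all(lamps_ok)
    return [circuit_closed for _ in lamps_ok]

def parallel_lit(lamps_ok):
    """Each lamp lights independently on its own branch."""
    return list(lamps_ok)

lamps = [True, True, False, True]      # the third lamp has burned out
print(series_lit(lamps))               # [False, False, False, False]
print(parallel_lit(lamps))             # [True, True, False, True]
```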

As historian Paul Israel writes, “Edison was able to succeed where others had failed because he understood that developing a successful commercial lamp also required him to develop an entire electrical system.” Thus, Edison had devised an integrated value-delivery-system––all components designed to work together, delivering a unifying value proposition. Users were asked to switch indoor-lighting from gas to electric, in return for superior safety, more comfortable breathing, higher light-quality, and equal individual lamp-control, all at lower cost. The value proposition was delivered by high-resistance filaments, more efficient dynamos, feeder-and-main distribution, and lamps connected in parallel. Seeing this interdependence, Edison writes (in public testimony he gave later), as quoted by Hughes:

It was not only necessary that the lamps should give light and the dynamos generate current, but the lamps must be adapted to the current of the dynamos, and the dynamos must be constructed to give the character of current required by the lamps, and likewise all parts of the system must be constructed with reference to all other parts…  The problem then that I undertook to solve was stated generally, the production of the multifarious apparatus, methods, and devices, each adapted for use with every other, and all forming a comprehensive system.

In its first century, GE repeatedly used this strategic model of integration around customer-experiences. However, its electric lighting-and-power system was still incomplete in the late 1880s. Its two components––first power, then lighting––faced crises soon enough.

In 1886, George Westinghouse introduced high-voltage alternating-current (AC) systems with transformers, enabling long-distance transmission, reaching many more customers. Previously a customer-focused visionary, Edison became internally-driven, resisting AC, unwilling to abandon his direct-current (DC) investment, citing safety––unconvincingly. His bankers recognized AC as inevitable and viewed lagging behind Westinghouse as a growing crisis. They forced Edison to merge with an AC competitor, formally creating GE in 1892 (dropping the name of Edison, who soon left). GE would finally embrace AC.

However, AC was more problematic. Its adoption was slowed by a chronic loss of power in AC motors and generators, caused by magnetic hysteresis in their iron. The size of this loss could only be discovered after a device was built and tested. Then, in 1892, the brilliant young German-American mathematician and electrical engineer Charles Steinmetz published his law of hysteresis, the first practical mathematical description of these losses in magnetic materials. Now that engineers could minimize such losses while devices were still in design, AC's use became much more feasible.
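Steinmetz's empirical law is conventionally written as follows (a standard modern statement, not quoted from this article):

\[
W_h \;=\; \eta \, B_{\max}^{\,1.6}
\]

per magnetization cycle and per unit volume of iron, where \(B_{\max}\) is the peak magnetic flux density and \(\eta\) is a constant of the material. With a formula like this, designers could estimate hysteresis losses from material data on paper, instead of discovering them only after a motor or generator had been built.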

In 1893 Steinmetz joined GE. He understood the importance of AC for making electricity more widely available and helped lead GE's development of the first commercial three-phase AC power system. He understood its mathematics, and under his leadership it would prevail over Westinghouse's two-phase system.

However, the first generators were powered by reciprocating steam engines, and engineers knew that AC needed faster-revolving generators. A possible solution was Charles Parsons' steam turbine, which Westinghouse, after buying its US rights in 1895, was improving. Then in 1896, engineer-lawyer Charles Curtis developed and patented a design using aspects of Parsons' and others' work. In part, his design made better use of steam-energy.

Yet, when he approached GE, some––including Steinmetz––were skeptical. However, GE––with no answer for the imminent Parsons/Westinghouse turbine––stood to lose ground in the crucial AC expansion. Curtis won a meeting with GE technical director Edwin Rice who, along with CEO Charles Coffin, understood the threat and agreed to back Curtis.

The project struggled, as Westinghouse installed its first commercial steam turbine in 1901. However, GE persisted, adding brilliant GE engineer William Emmet to the team, which in 1902 developed a new vertical configuration saving space and cost. Finally, in 1903 the Curtis turbine succeeded, at only one-eighth the weight and one-tenth the size of existing turbines, yet with the same level of power. With a shorter central shaft less prone to distortion, and lower cost, GE's value proposition for AC power-station customers was now clearly superior to that of Westinghouse. After almost missing the AC transition, GE had now crucially expanded the power component of its core electric value-delivery-system.

As of 1900, GE's electric lighting business––complement to its electric power business––had been a major success. However, with Edison's patents expiring by 1898, the lighting business now also faced a crisis. GE's incandescent lamp filaments in 1900 had low efficiency––about 3 lumens per watt (LPW), no improvement since Edison. Since higher efficiency would provide users with more brightness at the same power and electricity-cost, other inventors worked on this goal, using new metal filaments. GE Labs' first director Willis Whitney discovered that at extremely high temperatures, carbon filaments would assume metallic properties, increasing filament efficiency to 4 LPW. However, new German tantalum lamps then featured 5 LPW, taking leadership from GE from 1902 to 1911.

Meanwhile, filaments made of tungsten emerged; with tungsten's very high melting point, they achieved a much better 8 LPW. GE inexpensively acquired the patent for these––but the filaments were too fragile. As the National Academy of Sciences explains, "The filaments were brittle and could not be bent once formed, so they were referred to as 'non-ductile' filaments." Then, GE lab physicist and electrical engineer William Coolidge, hired by Whitney, developed a process of purifying tungsten oxide to produce filaments that were not brittle at high temperatures.

Coolidge's ductile tungsten was a major success, with an efficiency of 10 LPW and longer durability than competitive metal filaments. Starting in 1910, GE touted the new lamp's superior efficiency and durability, quickly restoring leadership in the huge incandescent lamp market. This strong position was reinforced in 1913 when another brilliant scientist hired by Whitney, Irving Langmuir, demonstrated that replacing the lamp's vacuum with inert gas doubled the filament's life span. The earlier-acquired tungsten-filament patent would protect GE's lighting position well into the 1930s. Thus, the design of the incandescent lamp, and the optimization of the value proposition that users would derive from it, were largely complete.
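A rough illustration of what those efficiency gains meant for users, using the LPW milestones above (the 60 W reference wattage is arbitrary, chosen only for comparison):

```python
# Light output at a fixed 60 W of electricity, for each filament milestone
# mentioned above. Same wattage -- and thus the same electricity bill --
# but more than three times the light by the ductile-tungsten era.

REFERENCE_WATTS = 60  # arbitrary illustrative wattage

milestones_lpw = [
    ("carbon, c. 1900", 3),
    ("metallized carbon (Whitney)", 4),
    ("tantalum (German)", 5),
    ("non-ductile tungsten", 8),
    ("ductile tungsten (Coolidge)", 10),
]

for filament, lpw in milestones_lpw:
    print(f"{filament:<30} {lpw:>2} LPW -> {REFERENCE_WATTS * lpw:>4} lumens")
```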

Thus, the company had successfully expanded and completed its electric lighting-and-power value-delivery-system. After initially building that great system, GE had met the AC crisis in the power business, and then the filament-efficiency crisis in lighting. GE had built its stable, key long-term core business.

GE’s customer-focused product-innovation in medical imaging (1913 and 1970-80)

As in lighting-and-power, GE built and later expanded its customer-focused product-innovation in medical imaging, profitably delivering sustained superior value.

Shortly after X-rays were first discovered in 1895, it was found that they pass through a body's soft tissue, but not hard tissue such as bones and teeth. It was also soon discovered that X-rays passing through the body would therefore produce an image on photographic film––hard tissue casting shadows, much as visible light does in photography. The medical community immediately recognized X-rays' revolutionary promise for diagnostic use.

However, X-rays and these images were produced by machines known as X-ray tubes, which before 1913 were erratic and required highly skilled operators. The work of GE's Coolidge, with help from Langmuir, would transform these devices, allowing them to deliver their potential value proposition––efficiently enabling accurate, reliable diagnosis by radiology technicians and physicians. We saw earlier that Edison's team did not merely focus narrowly on inventing a light-bulb––the incandescent lamp––but rather on designing the entire value-delivery-system for electric lighting-and-power. Similarly, Coolidge and Langmuir did not focus narrowly on producing X-rays, but on understanding and redesigning the whole medical-imaging value-delivery-system that used X-rays to capture the images.

The basis for X-ray tubes was "vacuum tubes." First developed in the early 1900s, these are glass cylinders from which all or most gas has been removed, leaving an at least partial vacuum. The tube typically contains at least two "electrodes," or contacts––a cathode and an anode. By 1912, potentially useful vacuum tubes had been developed but were unpredictable and unreliable.

X-rays are produced by generating and accelerating electrons in a special vacuum-tube. When the cathode is heated, it emits a stream of electrons that collide with the anode. As the electrons decelerate, most of their energy is converted to heat, and only about one percent into X-rays. Then, in 1912 Coolidge, who had improved incandescent lamps with ductile tungsten, suggested replacing the platinum electrodes then used in X-ray tubes with tungsten. Tungsten's high atomic number produced higher-energy X-rays, and its high melting point enabled good performance in the tube's high-heat conditions.
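To put that one-percent figure in perspective (the 100 W input is an arbitrary illustrative number, not from the source):

\[
100\ \text{W of electron-beam power} \;\approx\; 1\ \text{W of X-rays} \;+\; 99\ \text{W of heat deposited in the anode}
\]

which is why an anode material with a very high melting point, such as tungsten, mattered so much.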

The early X-ray tubes also had a lot of residual gas, used as the source of electrons to generate X-rays. In 1913, however, Langmuir showed the value of "high-vacuum" tubes––i.e., with no residual gas––produced with processes he had used earlier in improving incandescent lamps. He then discovered that a controlled emission of electrons could be obtained by using one of Coolidge's hot tungsten cathodes in a high vacuum. Coolidge quickly put a heated tungsten cathode in an X-ray tube, with a tungsten anode. This hot-cathode "Coolidge tube" provided the first stable, controllable X-ray generator.

The experiences of users––radiologists––improved dramatically. As Dr. Paul Frame of Oak Ridge Associated Universities writes:

Without a doubt, the single most important event in the progress of radiology was the invention by William Coolidge in 1913 of what came to be known as the Coolidge x-ray tube. The characteristic features of the Coolidge tube are its high vacuum and its use of a heated filament as the source of electrons… The key advantages of the Coolidge tube are its stability and the fact that the intensity and energy of the x-rays can be controlled independently… The high degree of control over the tube output meant that the early radiologists could do with one Coolidge tube what before had required a stable of finicky cold cathode tubes.

GE's innovation had transformed the value of using X-ray tubes; the same basic design is still used today in medicine and other fields. Coolidge and Langmuir had not invented the device but, by thinking through the most desirable user-experiences and redesigning the X-ray value-delivery-system accordingly, they produced the key innovation.

Expanding its 1913 medical-imaging value-delivery-system nearly seventy years later (1980), GE successfully developed and introduced the first practical, commercial magnetic-resonance-imaging (MRI) machine. Despite important differences between the 1913 and 1980 innovations, they shared much the same fundamental value proposition––enabling more useful medical diagnosis through interior-body images. Without GE's long previous experience delivering medical-imaging value, its 1980-83 MRI effort––mostly then seen as a low-odds bet––would have been unlikely. In fact, it proved a brilliant natural extension of GE's X-ray-based value propositions. As Paul Bottomley––a leading member of the GE MRI team––exclaims in 2019, "Oh, how far it's come! And oh, how tenuous MRI truly was at the beginning!"

MRI scans use strong magnetic fields, radio waves, and computer analysis to produce three-dimensional images of organs and other soft tissue. These are often more diagnostically useful than X-ray, especially for some conditions such as tumors, strokes, and torn or otherwise abnormal tissue. MRI introduced a new form of medical imaging, providing a much higher resolution image, with significantly more detail, and avoiding X-rays’ risk of ionizing radiation. While it entails a tradeoff of discomfort––claustrophobia and noise––the value proposition for many conditions is clearly superior on balance.

However, in 1980 the GE team was betting on unproven, not-yet-welcome technology. GE and other medical-imaging providers were dedicated to X-ray technology, seeing no reason to invest in MRI. Nonetheless, the team believed that MRI could greatly expand and improve the diagnostic experience for many patients and their physicians while producing ample commercial rewards for GE.

In the 1970s, X-ray technology was advanced via the CT scan––computed tomography––which GE licensed and later bought. CT uses combinations of X-ray images, computer-analyzed to assemble virtual three-dimensional image "slices" of a body. In the 1970s GE developed faster CT scanning, producing sharper images. The key medical-imaging advance after 1913, however, would be MRI.

The basis for MRI was the physics phenomenon known as nuclear magnetic resonance (NMR), used since the 1930s in chemistry to study the structure of molecules. Some atoms' nuclei display specific magnetic properties in the presence of a strong magnetic field. In 1973, two researchers suggested that NMR could construct interior-body images, especially of soft tissues––the idea of MRI––later winning them the Nobel Prize.

In the mid-1970s, while earning a Ph.D. in Physics, Bottomley became a proponent of MRI. He also worked on a related technology, MRS––magnetic resonance spectroscopy––but he focused on MRI. Then, in 1980 he interviewed for a job at GE, assuming that GE was interested in his MRI work. However, as he recounts:

I was stunned when they told me that they were not interested in MRI. They said that in their analysis MRI would never compete with X-ray CT which was a major business for GE. Specifically, the GE Medical Systems Division in Milwaukee would not give them any money to support an MRI program. So instead, the Schenectady GE group wanted to build a localized MRS machine to study cardiac metabolism in people.

Bottomley showed them his MRS work and got the job, but began pursuing his MRI goal. Group Technology Manager Red Reddington also hired another MRI enthusiast, William Edelstein, an Aberdeen graduate, who worked with Bottomley to promote an MRI system using a superconducting magnet. The intensity of the magnetic field produced by such a magnet––its field strength––is measured in teslas (T). GE was then thinking of building an MRS machine with a 0.12 T magnet. Instead, Bottomley and Edelstein convinced the group to attempt a whole-body MRI system with the highest field-strength magnet they could find. The highest-field quote was 2.0 T, from Oxford Instruments, who were unsure whether they could achieve this unprecedented strength. In the spring of 1982, unable to reach 2.0 T, Oxford finally delivered a 1.5 T magnet––still the standard for most MRIs today.

By that summer, the 1.5 T MRI produced a series of stunning images, with great resolution and detail. The group elected to present the 1.5 T product, with its recently produced images, at the major radiological industry conference in early 1983. This decision was backed by Reddington and other senior management, including Jack Welch, by then CEO––despite some risk, illustrated by the reaction at the conference. As Bottomley recalls:

We were totally unprepared for the controversy and attacks on what we’d done. One thorn likely underlying much of the response, was that all of the other manufacturers had committed to much lower field MRI products, operating at 0.35 T or less. We were accused of fabricating results, being corporate stooges and told by some that our work was unscientific. Another sore point was that several luminaries in the field had taken written positions that MRI was not possible or practical at such high fields.

Clearly, however, as the dust settled, the market accepted the GE 1.5T MRI:

Much material on the cons of high field strength (≥0.3 T) can be found in the literature. You can look it up. What is published–right or wrong–lasts forever. You might ask yourself in these modern times in which 1.5 T, 3 T and even 7 T MRI systems are ordinary: What were they thinking? … All of the published material against high field MRI had one positive benefit. We were able to patent MRI systems above a field strength of 0.7 T because it was clearly not “obvious to those skilled in the art,” as the patent office is apt to say.

In 2005, recognizing Edelstein’s achievements, the American Institute of Physics said that MRI “is arguably the most innovative and important medical imaging advance since the advent of the X-ray at the end of the 19th century.” GE has led the global Medical Imaging market since that breakthrough. As Bottomley says, “Today, much of what is done with MRI…would not exist if the glass-ceiling on field-strength had not been broken.”

*   *   *

Thus, as it had done with its electric lighting-and-power system, GE built and expanded its value-delivery-system for medical-imaging. In 1913, the company recognized the need for a more diagnostically valuable X-ray imaging system––both more stable and efficiently controllable. GE thus innovatively transformed X-ray tubes––enabling much more accurate, reliable, efficient medical diagnosis of patients for physicians and technicians.

Later, the GE MRI team’s bet on MRI in 1980 had looked like a long-shot. However, that effort was essentially an expansion of the earlier X-ray-based value proposition for medical imaging. The equally customer-focused MRI bet would likewise pay off well for patients, physicians, and technicians. For many medical conditions, these customers would experience significantly more accurate, reliable, and safe diagnoses. This MRI bet also paid off for GE, making Medical Imaging one of its largest, most profitable businesses.

Science-based product innovation

We started this Part One post by suggesting that GE applied three strategic principles in its first century––though it later largely abandoned them––starting with the above-discussed principle of customer-focused product-innovation. Now we turn to the second of these three principles––science-based product innovation.

In developing product-innovations that each helped deliver some superior value proposition, GE repeatedly used scientific knowledge and analysis. This science-based perspective consistently increased the probability of finding solutions that were truly different and superior for users. Most important GE innovations would have been either non-obvious or impossible without this scientific perspective. Thus, to help realize genuinely different, more valuable user-experiences, and to make delivery of those experiences profitable and proprietary (e.g., patent-protected), GE businesses made central use of science in their approach to product innovation.

This application included physics and chemistry, along with engineering knowledge and skills, supported by the broad use of mathematics and data. It was frequently led by scientists, empowered by GE to play a leading, decision-making role in product innovation. After 1900, building on Edison’s earlier product-development lab, GE led all industry in establishing a formal scientific laboratory. It was intended to advance the company’s knowledge and understanding of the relevant science, but primarily focused on creating practical, new, or significantly improved products.

Throughout GE's first century, its science-based approach to product innovation was central to its success. After 1980, of course, GE did not abandon science. However, it became less focused on fundamental product-innovation, more seduced by the shorter-term benefits of marginal cost and quality enhancement, and thus retreated from its century-long commitment to science-based product-innovation. Following are some examples of GE's success using that science-based approach throughout most of its first century.

*   *   *

Edison was not himself a scientist but knew that his system must be science-based, with technical staff. As Ernest Freeberg writes:

What made him ultimately successful was that he was not a lone inventor, a lone genius, but rather the assembler of the first research and development team, at Menlo Park, N.J.

Edison built a technically-skilled staff. Most important among them was Francis Upton, a Princeton/Berlin-trained mathematician-physicist. Edison had great use for scientific data and expertise––Upton had studied under physicist Hermann von Helmholtz, and now did research and experimentation on lamps, generators and wiring systems. He was invaluable to Edison by, as Hughes describes, "bringing scientific knowledge and methods to bear on Edison's design ideas."

Another lab assistant later described Upton as the thinker and “conceptualizer of systems”:

It was Upton who coached Edison about science and its uses in solving technical problems; it was Upton who taught Edison to comprehend the ohm, volt, and weber (ampere) and their relation to one another. Upton, for instance, laid down the project’s commercial and economic distribution system and solved the equations that rationalized it. His tables, his use of scientific units and instruments, made possible the design of the system.

As discussed earlier, the key insight for the lighting-and-power project was the importance of using high-resistance filaments in the incandescent lamps. Edison only got to this insight with Upton's crucial help. Upton had helped Edison understand Joule's and Ohm's fundamental laws of electricity, which show the relationships between current and resistance. They needed to reduce the energy lost in the conductors; using these relationships, they saw from Joule's law that they could do so if, instead of using more copper, they reduced the current. Bringing in Ohm's law––resistance equals voltage divided by current––meant that increasing resistance, at a given level of voltage, would proportionately reduce current. As historian David Billington explains[2]:

Edison’s great insight was to see that if he could instead raise the resistance of the lamp filament, he could reduce the amount of current in the transmission line needed to deliver a given amount of power. The copper required in the line could then be much lower.

Thus, as Israel continues:

Edison realized that by using high-resistance lamps he could increase the voltage proportionately to the current and thus reduce the size and cost of the conductors.

These relationships of current to resistance are elementary to an electrical engineer today, but without Upton and Edison thoughtfully applying these laws of electricity in the late 1870s, it is doubtful they would have arrived at the key conclusion about resistance. Without that insight, the electric lighting-and-power project would have failed. Numerous factors helped the success of that original system, and that of the early GE. Most crucial, however, was Edison and Upton's science-based approach.
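In modern notation, the reasoning runs roughly as follows (a reconstruction for today's reader, not Upton's own calculation):

\[
P_{\text{lamp}} = V I, \qquad
P_{\text{line}} = I^{2} R_{\text{line}} = \left(\frac{P_{\text{lamp}}}{V}\right)^{2} R_{\text{line}}, \qquad
R_{\text{lamp}} = \frac{V}{I}
\]

For a fixed amount of power delivered to each lamp, a higher-resistance filament means a higher operating voltage and a lower current; the conductor loss then falls with the square of that current, and the copper cross-section needed to keep the loss acceptable shrinks accordingly.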

That approach was also important in other innovations discussed above, such as:

  • Steinmetz’ law of hysteresis and other mathematical and engineering insights into working with AC and resolving its obstacles
  • Langmuir’s scientific exploration and understanding of how vacuum tubes work, leading to the high-vacuum tube and contributing to the Coolidge X-ray tube
  • Coolidge's deep knowledge of tungsten, and his skill in applying that understanding first to lamp filaments and then to the design of his X-ray tube
  • And as Bottomley writes, the MRI team’s “enormous physics and engineering effort that it took to transform NMR’s 1960’s technology–which was designed for 5-15 mm test tube chemistry–to whole body medical MRI scanners”

Another example (as discussed in the section on "synergistic product-innovation," later in this Part One) is the initial development of the steam turbine for electric power generation, and then its later reapplication to aviation. That first innovation––for electric power––obviously required scientific and engineering knowledge. The later reapplication of that turbine technology to aviation was arguably an even more creative, complex innovation. Scientific understanding of combustion, air-compression, thrust, and other principles had to be combined and integrated.

Finally, the power of GE’s science-based approach to product innovation was well illustrated in its plastics business. From about 1890 to the 1940s, GE needed plastics as electrical insulation––non-conducting material––to protect its electrical products from damage by electrical current. For the most part during these years, the company did not treat plastics as an important business opportunity, or an area for major innovation.

However, by the 1920s the temperature limitations of existing insulation were becoming increasingly problematic. Many plastics are non-conductive because they have high electrical resistance, but that resistance decreases at higher temperatures. As power installations expanded and electric motors did more work, they produced more heat; beyond about 265˚ Fahrenheit (later raised to about 310˚ F), insulation materials would fail. Equipment makers and customers therefore had to use motors and generators built with much more iron and copper––thus at higher cost––to dissipate the heat.

Then, in 1938-40, brilliant GE chemist Eugene Rochow created a clearly superior, GE-owned solution to this high-temperature insulation problem––a polymer raising the limit to above 356˚ F. It also proved useful in many other applications––sealants, adhesives, lubricants, personal-care products, biomedical implants, and others, and thus became a major business for GE.

Rochow’s success was the first major application of a rigorously science-based approach to product-innovation in GE plastics. This breakthrough traced to insights resulting from his deep knowledge of the relevant chemistry and his application of this understanding to find non-obvious solutions. This success gave GE confidence to take plastics-innovation more seriously, going well beyond insulation and electrical-products support. The science-based approach would be used again to create at least three more new plastics.

In 1938, Corning Glass Works announced a high-temperature electrical insulator called silicone––pioneered years earlier by English chemist Frederick Kipping. Corning hoped to sell glass fibers in electrical insulation. They asked GE––with whom they had previously collaborated––to help them find a practical manufacturing method for the new polymer. Based on this history, they would later complain that Rochow and GE had stolen their innovation.

In reality, Rochow thought deeply about the new polymer and, with his science-based approach, realized its potential as very-high-temperature insulation. However, he also concluded that Corning’s specific design would lose its insulating strength under the extended exposure to very high temperatures it would very likely face. Rochow then developed a quite different silicone design that avoided this crucial flaw.

To illustrate Rochow’s science-based approach, following is a partial glimpse of his thought process. He was an inorganic chemist (unlike the mostly organic chemists at GE and Corning). A silicone polymer’s “backbone” chain is inorganic––alternating silicon and oxygen atoms, with no carbon. Desirably, this structure helps make silicones good insulators, even at high temperatures. However, for product-design flexibility, most silicones are also partly organic––they carry organic “side groups” of carbon atoms attached to the silicon atoms. The side groups in the Kipping/Corning silicone were ethyl and phenyl. As Rochow recounted in his 1995 oral history:

I thought about it. “Ethyl groups—what happens when you heat ethyl phenyl silicone up to thermal decomposition [i.e., expose it to very high temp]? Well, it carbonizes. You have carbon-carbon bonds in there… You have black carbonaceous residue, which is conducting. [i.e., the silicone will lose its insulating property]. How can we avoid that? Well, by avoiding all the carbon-carbon bonds. What, make an organosilicon derivative [e.g., silicone] without any carbon-carbon bonds? Sure, you can do it. Use methyl. Kipping never made methyl silicone. Nobody else had made it at that time, either. I said, in my stubborn way, ‘I’m going to make some methyl silicone.’”

Rochow did succeed in making methyl silicone and demonstrated that it delivered a clearly superior value proposition for high-temperature electrical insulation. In contrast to the Corning product, this GE silicone maintained its insulating performance even after extended high-temperature exposure. Corning filed patent interferences but lost on all five GE patents. The new material also proved valuable in the many other applications noted above, from sealants and adhesives to biomedical implants. It took ten years to become profitable, but silicones grew into a major category, helping make GE a leading plastics maker. The same fundamentally science-based approach applied by Rochow would pay off repeatedly with further plastics innovations during the rest of GE’s first century.

In 1953, chemist Daniel Fox, working to improve GE’s electrical-insulating wire enamel, discovered polycarbonate (PC). This new plastic showed excellent properties[3]––nearly as transparent as glass, yet very tough––so it looked like acrylic but was much more durable. It was also a good insulator, heat resistant, and able to maintain its properties at high temperatures––and became a very large-volume plastic.[4]

This discovery is often described as accidental. Luck was involved, as Fox had not set out intending to invent a material like PC. However, his science-based instinct led to the discovery. PC belongs to the “carbonates” category, long known to hydrolyze easily––to break down by reaction with water; researchers had therefore given up on carbonates as unusable. Fox, however, remembered that as a graduate student he had once encountered a particular carbonate that, as part of an experiment, he needed to hydrolyze––yet, hard as he tried, he could not induce it to do so. Following that hint, he explored and found, surprisingly, that PC does not readily hydrolyze; thus, it could be viable as a new plastic.

Asked in 1986 why no one else had discovered PC before him, Fox gave as his first reason that “…everyone knew that carbonates were easily hydrolyzed and therefore they wouldn’t be the basis for a decent polymer.” Fox described himself as an “opportunist.” Not bound by rote convention, he was open to non-obvious routes, even ones that “everyone knew” would not work. Fox recounted how one of his favorite professors, “Speed” Marvel at Illinois, “put forth a philosophy that I remember”:

For example, he [Marvel] told the story of a couple of graduate students having lunch in their little office. One of them ate an apple and threw the apple core out the window. The apple core started down and then turned back up[5]. So, he made a note in his notebook, “Threw apple core out the window, apple core went up,” and went back to reading his magazine. The other man threw his banana peel out the window. It started down and then it went up. But he almost fell out of the window trying to see what was going on. Marvel said, “Both of those guys will probably get Ph.D.’s, but the one will be a technician all of his life.”

I remember that. First you must have all the details, then examine the facts. He that doesn’t pursue the facts will be the technician, Ph.D. or not.

Fox’s insightful, if lucky, discovery proved to be a watershed for GE. Following the major silicones and PC successes, GE was finally convinced that it could––and should aggressively try to––succeed in developing major new plastics, even competing against the chemical giants. Applying the same science-based approach, GE produced the first synthetic diamonds and, later, two additional major new products––Noryl and Ultem. These two delivered important improvements in heat-resistance, among other valuable customer experiences, at lower cost than some other high-performance plastics.

GE thus helped create today’s plastics era, which delivers low costs and functional or aesthetic benefits. On the downside, as the world has increasingly learned, these benefits are often mixed with an unsettling range of environmental/safety threats. The company was unaware of many of these threats early in its plastics history. Still, it regrettably failed to aggressively apply its science-based capabilities to uncover––and address––some of these threats earlier, at least after 1980. Nonetheless, during GE’s first century, its science-based approach was instrumental in its great record of product innovation.

Of course, most businesses today use science to some degree and claim to be committed to product (or service) innovation. However, many of these efforts are in fact primarily focused on marginal improvements––in costs, and in defects or other undesired variations––not on fundamental improvements in performance of great value to customers. After its first century, GE––as discussed later in Parts Two-Four of this series––would follow that popular if misguided pattern, reducing its emphasis on science-based breakthrough innovation in favor of easier, but lower-impact, marginal improvements in cost and quality. Prior to that shift, however, this science-based approach was a second key aspect––beyond its customer focus––of GE’s first-century product-innovation.

Synergistic product innovation

Thus, the two key strategic principles discussed above were central to GE’s product-innovation approach in that first century––it was customer-focused and science-based. An important third principle was that, where possible, GE identified or built, and then strengthened, synergies among its businesses. These were shared strengths that facilitated product-innovation of value to customers.

In GE’s first century, all its major businesses––except for plastics––were related to, and synergistically complementary with, one or more other electrically-rooted GE businesses. Thus, though diverse, its (non-plastics) businesses and innovations in that first era were related via electricity or other aspects of electromagnetism, sharing fundamental customer-related and science-based knowledge and understanding. This shared background and perspective enabled beneficial, mutually reinforcing capabilities. These synergies included sharing aspects of the businesses’ broadly defined value propositions and sharing some of the technology for delivering important experiences in those propositions.

The first and perhaps most obvious example of this electrically-related synergy was the Edison electric lighting-and-power system. GE’s provision of electrical power was the crucial enabler of its lighting, which in turn provided the first purpose and rationale for the power. Later, expanding this initial synergistic model, GE would introduce many other household appliances, again both promoting and leveraging the electric-power business. These included first or early versions of the electric fan, toaster, oven, refrigerator, dishwasher, and air conditioner, among others.

Many of these GE products were fast-follower responses to other inventors eager to apply electricity, rather than major breakthrough-innovations. An exception was Calrod––an electrically-insulating but heat-conducting ceramic element––which made cooking safer. Without a new burst of innovation, GE appliances eventually became increasingly commoditized. Perhaps that decay was more a failure of imagination than an inevitability in the face of Asian competition, as many would later claim. In any case, appliances nonetheless remained a large business throughout GE’s first century, thanks to its original synergies.

Another example of successful innovation via electrical or electromagnetic synergy was GE’s work in radio. The early radio transmitters of the 1890s generated radio-wave pulses, able to convey only Morse-code dots-and-dashes––not audio. In 1906, recognizing the need for full-audio radio, a GE team developed what it termed a “continuous wave” transmitter. Physicists had suggested that an electric alternator––a rotating generator that produces alternating current––if run at a high enough cycle-speed, would generate electromagnetic radiation at radio frequencies. It thus could serve as a continuous-wave (CW) radio transmitter.

GE’s alternator-transmitter seemed such a winner that in 1919 the US government encouraged the company to form the Radio Corporation of America––to ensure US ownership of this important technology. RCA began developing a huge complex for international radio communications, based on the giant alternators. However, in the 1920s this technology was abruptly supplanted by short-wave transmission, enabled by fast-developing vacuum-tube technology, which offered far lower costs. Developing other radio-related products, RCA still became a large, innovative company, but antitrust litigation forced GE to divest its ownership in 1932.

Nonetheless, the long-term importance of GE’s CW innovation was major. This breakthrough––implemented first via alternators, then vacuum-tube and later solid-state technology––became the enduring winning principle for radio transmission. Moreover, GE’s related vacuum-tube contributions were substantial, including the high-vacuum tube. And, growing out of this radio-technology work, GE later made some contributions to the development of television.

However, beyond these relatively early successes, a key example of great GE synergy in that first century is the interaction of power and aviation. Major advances in flight propulsion––creating GE’s huge aviation business––were enabled by its earlier innovations, especially turbine technology from electric power but also tungsten metallurgy from both lighting and X-ray tubes. Moreover, this key product-innovation synergy even flowed in both directions, as later aviation-propulsion innovations were reapplied to electric power.

Shortly after steam turbines replaced steam engines for power generation, gas turbines emerged. They efficiently converted the energy of combusted fuel into great rotational power, at first to spin an electrical generator. During the First World War, GE engineer Sanford Moss invented a radial gas compressor, using centrifugal force to “squeeze” the air before it entered the turbine. Because aircraft engines lost half their power in the thin air of high altitudes, the US Army, hearing of Moss’s “turbosupercharger” compressor, asked for GE’s help. Moss’s invention proved able to sustain an engine’s full power even at 14,000 feet. GE had entered the aviation business, producing 300,000 turbosuperchargers in WWII. This success was enabled by GE’s knowledge and skills from previous innovations. Austin Weber, in a 2017 article, quotes former GE executive Robert Garvin:

“Moss was able to draw on GE’s experience in the design of high rotating-speed steam turbines, and the metallurgy of ductile tungsten and tungsten alloys used for its light bulb filaments and X-ray targets, to deal with the stresses in the turbine…of a turbo supercharger.”

GE’s delivery of a winning value proposition to aviation users––focused on increased flight-propulsion power, provided by continued turbine-technology improvements––would continue and expand into the jet engine era. Primary focus would be on military aviation, until the 1970s when GE entered the commercial market. In 1941, Weber continues, the US and British selected GE to improve the Allies’ first jet engine––designed by British engineer Frank Whittle. GE was selected because of its “knowledge of the high-temperature metals needed to withstand the heat inside the engine, and its expertise in building superchargers for high-altitude bombers, and turbines for power plants and battleships.”

In 1942, the first US jet flew, powered by GE’s I-A engine, succeeded in 1943 by the first US production jet engine, the J31. Joseph Sorota, a member of GE’s original team, recounted the experience, explaining that the Whittle engine “was very inefficient. Our engineers developed what now is known as the axial flow compressor.” This compressor is still used in practically every modern jet engine and gas turbine. In 1948, as GE historian Tomas Kellner writes, GE’s Gerhard Neumann improved the jet engine further with the “variable stator”:

It allowed pilots to change the pressure inside the turbine and make planes routinely fly faster than the speed of sound.

The J47 played a key role as the engine for US fighter jets in the Korean War and remained important for military aircraft for another ten years. It became the highest-volume jet engine ever, with GE producing some 35,000 units. The variable-stator innovation led to the J79 military jet engine in the early 1950s, followed by many other advances in this market.

In the 1970s, the company successfully entered the commercial aviation market, and later became a leader there as well. Many improvements followed, and by 1980 GE aviation reached about $2.5 billion in revenue (about 10% of total GE).

GE’s application to aviation––of turbine and other technology first developed for the electric power business––brilliantly demonstrated the company’s ability to capture powerful synergies, based on shared customer-understanding and technology. Then in the 1950s as Kellner continues:

The improved performance made the aviation engineers realize that their variable vanes and other design innovations could also make power plants more efficient.

Thus, GE reapplied turbine and other aviation innovations––what it now terms aeroderivatives––to strengthen its power business. The company had completed a remarkable virtuous cycle. Like symbiosis in biology, these synergies between two seemingly unrelated businesses were mutually beneficial.

*   *   *

Therefore, in its first century GE clearly had great success proactively developing and using synergies tied to its electrical and electromagnetic roots. As mentioned at the outset of this Part One post, this synergistic approach was the third strategic principle––after customer-focused and science-based––that GE applied in producing its product-innovations during that era. However, while the dominant characteristic of GE’s first century was its long line of often-synergistic product-innovations, that does not mean that GE maximized all its opportunities to synergistically leverage those key roots.

Perhaps an early, partial example was radio, where GE innovated importantly but seemed to give up rather easily on the technology after being forced to divest RCA. GE might instead have pressed on and tried to recapture the lead in this important electrical area.

Then, after 1950 GE again folded too easily in the emerging semiconductor and computer markets. Natural fields for this electrical-technology leader, these markets would be gateways to the vastly significant digital revolution. In the 1970s, or later under Welch, GE still could have gotten serious about re-entering these important markets. It’s hard to believe that GE could not have become a leading player in these if it had really tried. Moreover, beyond missing some opportunities for additional electrically-related synergies, GE also drifted away from strategic dependence on such synergies, by developing plastics. Profitable and rapidly growing, plastics became GE’s sole business not synergistically complementary with its traditional, electrically-related ones.

These probably missed opportunities, and modest strategic drifting, may have foreshadowed some of GE’s future, larger strategic missteps that we will begin to explore in Part Two of this series. Thus, we’ve seen that GE displayed some flaws in its otherwise impressive synergistic approach. Overall, nonetheless, the company’s product-innovation approach and achievements in its first century were largely unmatched. Although later essentially abandoned, this value-delivery-driven model should have been sustained by GE and still merits study and fresh reapplication in today’s world.

In any case, however, by the 1970s GE needed to rethink its strategy. What would be the basis for synergistically connecting the businesses, going forward? How should the company restructure to ensure that all its non-plastics businesses had a realistic basis for mutually shared synergies, allowing it to profitably deliver superior value?

In answering these questions, GE should have redoubled its commitment to customer-focused, science-based product-innovation, supported by synergies shared among businesses. Using such a strategic approach, although GE might have missed some of the short-term highs of the Welch era, it could have played a sustained role of long-term leadership, including in its core energy markets. However, as GE’s first century wound down, this astounding company––that had led the world in creative, value-delivering product-innovation––seemed to lack the intellectual energy to face such questions again.

Instead, as we will see, the company seemed to largely abandon the central importance of product-innovation, growing more enamored of service––profitably provided on increasingly commoditized products. GE would pursue a series of strategic illusions––believing that it controlled its own destiny, as Jack Welch urged––but would ultimately fail as the company allowed the powerful product-innovation skills of its first century to atrophy. So, we turn next to the Welch strategy––seemingly a great triumph but just the first and pivotal step in that tragic atrophy and decline. Thus, our next post of this GE series will be Part Two, covering 1981-2010––The Unsustainable Triumph of the Welch Strategy.

 

Footnotes––GE Series Part One:

[1] By this time, some street lighting was provided by electric-arc technology, far too bright for inside use

[2] David P. Billington and David P. Billington Jr., Power, Speed, and Form: Engineers and the Making of the Twentieth Century (Princeton: Princeton University Press, 2006) 22

[3] Unfortunately, it is made with bisphenol A (BPA), which later became controversial due to health concerns

[4] Bayer discovered it at about the same time; the two companies shared the rewards via cross-licensing

[5] Presumably meaning it stopped and flew back up, above the window

GE’s Illusions of Destiny-Controlled…& the World’s Real Losses

Introductory Overview to Value Delivery in the Rise-and-Decline of General Electric

A Four-Part Series on our Value-Delivery blog

By Michael J. Lanning––Copyright © 2021 All Rights Reserved

 

We contend that now, more than ever, business leaders need strategies that profitably deliver superior value to customers. The power of applying––and downside of neglecting––these principles of value delivery are extremely well illustrated by studying the growth and decline of General Electric.

Jack Welch, GE’s legendary CEO––1981-2001––famously urged businesses to “control your own destiny.” By 2001, GE had become the world’s most valued and admired company, apparently fulfilling Welch’s vision of destiny-controlled. However, in retrospect, the company’s greater and more lasting achievements came in its prior, partially forgotten first century––1878-1980. Using principles of value-delivery strategy, GE built a corporate giant on a base of great product-innovation.

Then, however, despite its much-celebrated, apparent great triumph, the Welch strategy and its continuation under successor Jeff Immelt largely abandoned those strategic principles. Instead, the company chased self-deceiving illusions, ultimately losing control of its destiny and declining precipitously––though it has stabilized now. The four-part Value-Delivery-Blog series introduced here aims to identify the still-relevant key principles of GE’s historic strategy that, in its first century, made it so important––and whose later neglect and decay eventually led to its major decline and missed opportunities.

A link to Part One is provided at the end of this Introductory Overview.

Note from the author: special thanks to colleagues Helmut Meixner and Jim Tyrone for tremendously valuable help on this series.

Part One: Customer-Focused, Science-Based, and Synergistic Product-Innovation (1878-1980)

GE built a corporate giant in its first century via major product-innovations that profitably delivered real value to the world. These innovations followed three principles of value-delivery strategy, discussed in Part One––they were customer-focused, science-based, and where possible, powerfully synergistic among GE businesses.

 

Following the first, most fundamental of these three principles, GE innovations in that first century were customer-focused––integrated around profitably delivering superior value[1] to customers. Thus, GE businesses delivered winning value propositions––superior combinations of customer-experiences, including costs. We term a business managed in this way a “value-delivery-system,” an apt description for most GE businesses of that era.

Part One discusses two key examples of these GE value-delivery-systems successfully created and expanded in that century after 1878. First is the company’s electric lighting-and-power value-delivery-system. It included incandescent-lamps––light bulbs––and the supply of electricity to operate the lighting. Edison and team studied indoor-lighting users’ experiences––with gas-lighting––and designed ways to improve those experiences.

Thus, users were offered a superior value proposition––proposing that they replace their gas-lighting system with Edison’s electric one. As discussed in Part One, users in return got a superior set of experiences, including greater safety, more comfortable breathing––the air no longer “fouled” by gas––higher-quality light, shorter-but-adequate lamp-life, and equally convenient operation, all at lower total cost than gas lighting. Profitably delivering this value proposition was enabled by key elements of lamp-design, more efficient electricity generation-and-distribution, and other important inventions.

This brilliant, integrated system was GE’s founding and most important innovation. Aside from the lamps, consider just electric power. Discussing the world’s infrequent, truly breakthrough advances in technology, historian of science Vaclav Smil writes:

And there has been no more fundamental, epoch-making modern innovation than the large-scale commercial generation, transmission, distribution, and conversion of electricity.… the electric system remains Edison’s grandest achievement: an affordable and reliably available supply of electricity has opened doors to everything electrical, to all great second-order innovations ranging from gradually more efficient lighting to fast trains, from medical diagnostic devices to refrigerators, from massive electrochemical industries to tiny computers governed by microchips.

Then, in the first fifteen years of the twentieth century, the company expanded this electric lighting-and-power value-delivery-system. Its new power-innovations made electricity more widely available by helping enable long-distance transmission. Its new lighting innovations produced major advances in lamp efficiency.

A second key GE value-delivery-system discussed in Part One is medical imaging. Starting with advances to the X-ray tube, GE much later developed the first practical magnetic-resonance-imaging (MRI) machine. These innovations delivered a superior value proposition––first making diagnoses more accurate and reliable, and later, via MRI, safer.

In the great product-innovations of its first century, GE’s first strategic principle was to be customer-focused. As a result, it repeatedly created winning value-delivery-systems––integrating activity around profitable delivery of superior value propositions.

Its second strategic principle was taking a science-based approach to its product-innovation efforts. In that era, GE’s major product-innovations were typically led by highly-knowledgeable physicists, chemists, or other scientists, making extensive use of mathematics and data. The company led all industry in establishing a formal laboratory devoted to scientific research in the context of developing practical, valuable products. This science-based approach increased the likelihood that GE’s product-innovations would result in truly different, meaningfully superior customer experiences. It also helped make delivery of these experiences profitable and proprietary (e.g., patent-protected) for the company.

Rigorous application of scientific theory and data enabled many GE innovations. Edison’s lighting-and-power project developed its key insight––concerning electrical resistance––by using the early science of electricity. Other examples of the importance of GE’s science-based approach included the work enabling alternating current (AC), improvements to the X-ray tube, the use of tungsten-knowledge to improve lamps, and later the extensive physics and engineering work to make MRI a practical reality. Notably, this approach also transformed GE Plastics––from providing secondary support for the electrical businesses to a major business in its own right––as we also discuss in Part One.

GE also applied a third key principle––where possible, it identified or built, then reinforced, powerful synergies among its businesses. Based on reapplication of technologies and customer-insights from one GE business to another, these were shared strengths, enabling new GE innovations, further profitably delivering superior value.

In its first century, GE had great success developing synergies related to its electrical and electromagnetic roots. All its major businesses––except plastics––shared these electrical roots. They thus synergistically shared valuable knowledge and insights that were customer-related and science-based. The company’s founding innovation––the Edison lighting-and-power value-delivery-system––was inherently synergistic. Power enabled lighting; lighting justified and promoted power. GE later emulated this synergy by developing its household-appliance business. As discussed in Part One, perhaps the company’s greatest example of synergies between seemingly unrelated businesses was the sharing of turbine technology, first developed for power, then reapplied to aviation.

GE would also fail to realize some synergistic opportunities tied to its electrical roots. Most surprisingly, the company gave up easily on the crucially important markets for semiconductors and computers, while embracing plastics, its first major non-electrical business. This subtle drift away from its electrical roots foreshadowed GE’s later, ultimately problematic focus on non-electrical businesses––especially the financial kind. Still, GE’s first century was a tour de force of value-delivery-driven product-innovation. After 1981, Welch and his team should have better understood and reinvigorated those past strengths.

Part Two: Unsustainable Triumph of the Welch strategy (1981-2010)

For over 20 years after 1981, GE triumphed with Jack Welch’s radical new strategy that included a significant reliance on financial services, a strategy continued by Welch successor Jeff Immelt until about 2010. As discussed in Part Two, this complex strategy ingeniously melded some strengths of GE’s huge and reliably profitable traditional businesses with its aggressive, new financial ones. It enabled the latter to benefit uniquely from GE’s low corporate cost-of-borrowing––due to its huge, profitable, and exceptionally credit-worthy traditional businesses.

This hybrid strategy was largely not available to financial competitors as they lacked GE’s huge traditional product businesses. Thus, with this crucial cost advantage, the new GE financial businesses could profitably deliver a clearly superior value proposition to financial customers. Therefore, these financial businesses grew dramatically and profitably in the Welch era, leading GE’s overall exceptional profitable growth, making GE the world’s most valued and admired corporation by 2001. Yet, there were longer-term flaws hidden in this triumphant strategy.

*   *   *

Through our value-delivery lens we see many businesses’ strategies as what we term “internally-driven.” Managers of an internally-driven business think inside-out, deciding what products and services to offer based on what they believe their business does best, or best leverages its assets and “competencies.” They are driven by their own internal agenda, more than what customers would potentially value most. Even though GE’s financial businesses delivered a profitable, superior value proposition for two decades, the overall Welch strategy increasingly became internally-driven.

Instead of insisting on new product-innovation in the traditional industrial businesses, GE used those old core businesses mainly to help enable the rapid, highly profitable growth of the financial business. Moreover, the hybrid strategy’s advantages and resulting growth were inherently unsustainable much past 2000. Its heady successes likely led Welch’s and later Immelt’s teams to embrace the illusion that GE could master financial or any other business, no matter how new and complex. However, the high returns in GE Capital (GEC, its finance business) reflected high risk––not just financial innovation––and higher risk than GE fully understood. These risks very nearly wrecked the company in 2008-09.

In addition, as will be discussed in Part Two, the unique cost advantage of the GE financial businesses (GEC) could not indefinitely drive much faster growth for GEC than for the non-financial businesses. Given the accounting and regulatory rules of the time, GEC’s cost advantage was only available if GEC’s revenues remained below fifty percent of total GE. That threshold was reached in 2000. After that point, GE needed either to reduce GEC’s growth rate or to dramatically increase the growth rate of the traditional businesses. Welch was likely aware of this problem in 2000 when he attempted the gigantic acquisition of Honeywell, but European regulators rejected the move on antitrust grounds. The limitations of the strategy, and the pressures on it, were clearly emerging by 2000.

Thus, the Welch and then Immelt teams needed to replace the GEC-centered strategy, creating a new surge of product-innovation in the non-financial businesses, in order to grow the aging company. Complicating this challenge, however, GE’s internally-driven instinct focused increasingly on financial engineering, efficiency, and optimization. Eventually, its historically crucial capabilities of customer-focused, science-based product-innovation, with shared synergies, all weakened. GE’s industrial product-innovation capabilities had atrophied. As Gerard Tellis of the USC School of Business writes:

Ironically, GE rose by exploiting radical innovations: incandescent light bulbs, electric locomotives, portable X-ray machines, electric home appliances, televisions, jet engines, laser technologies, and imaging devices. Sadly, in the last few decades, that innovation-driven growth strategy gave way to cash and efficiency-focused acquisitions.

R&D spending continued, but the historical science-based focus on product-innovation breakthroughs gave way to a focus on cost, quality and information technology (IT). Moreover, a new synergy was achieved between the traditional and financial businesses, but the historic emphasis on synergy in product-innovation declined.

Welch’s high-octane strategy was not only flawed by being an internally-driven approach but was also unsound in that it was what we term “customer-compelled.” Again, through our value-delivery lens, we see many businesses trying to avoid the obvious risk of internally-driven thinking––the risk of neglecting or even ignoring what customers want. Therefore, they often follow a well-meaning but misguided customer-compelled strategic map, just as GE did during and after Welch. They strive to “be close and listen” to customers, promising “total satisfaction,” but they may fail to discover what experiences customers would most value, often because customers do not know what they would most value. That logical error was a limitation in one of Welch’s key, most widely influential initiatives––the Six Sigma model.

This demanding, data-intensive technique can be very powerful in reducing variances in product performance, highly useful in achieving manufactured products (or services) that much more precisely meet customers’ requested performance. However, it is only effective if we understand correctly what the most valuable dimensions of that performance are. Otherwise, we only get a customer-compelled result, not a very valuable result for the customer. Moreover, the Six Sigma approach could also be used in an internally-driven way. A business may select a dimension of performance for which it knows how to reduce variance, but which may not be a dimension of highest value to the customer. The technique will then lead again to less variance, but not more value for the customer.

Welch was hailed not only for his––ultimately too clever––strategic hybrid of the industrial and financial businesses. He was also lionized, until the recent years of GE’s decline, for his major emphasis on efficiency––reducing costs and making marginal improvements in quality––including Six Sigma. These initiatives achieved meaningful cost reduction, a part of delivering value to customers. But by the 2010s (after the financial crisis) many business thinkers had changed their tone. In 2012, for example, Ron Ashkenas wrote in HBR:

Six Sigma, Kaizen, and other variations on continuous improvement can be hazardous to your organization’s health. While it may be heresy to say this, recent evidence from Japan and elsewhere suggests that it’s time to question these methods… Looking beyond Japan, iconic six sigma companies in the United States, such as Motorola and GE, have struggled in recent years to be innovation leaders. 3M, which invested heavily in continuous improvement, had to loosen its sigma methodology in order to increase the flow of innovation… As innovation thinker Vijay Govindarajan says, “The more you hardwire a company on total quality management, [the more] it is going to hurt breakthrough innovation. The mindset that is needed, the capabilities that are needed, the metrics that are needed, the whole culture that is needed for discontinuous innovation, are fundamentally different.”

In 2001, Jeff Immelt took over as GE CEO. Through 2007, Immelt’s GE increased R&D and produced incremental improvements in product cost, quality, and performance, along with advances in IT. However, GE needed to return, more fundamentally, to its historical strengths of customer-focused, science-based product-innovation. The company needed to fundamentally rethink its markets and its value-delivery-systems––deeply studying the evolution of customer priorities and the potential of technology and other capabilities to meet those priorities.

They might have discovered major new potential value propositions, and implemented new, winning value-delivery-systems, perhaps driven again by product-innovation as in GE’s first century. However, Immelt and team seemed to lack the mindset and skills for strategic exploration and reinvention. They sensed a need to replace the Welch strategy but depended on GEC for profits. They thus proved, not surprisingly, too timid to act until too late, resulting in near-fatal losses in the 2008 financial crisis. This flirting with disaster was followed by a decade of seemingly-safe strategies, lacking value-delivery imagination.

Part Three: Digital Delusions (2011-2018)

Thus, we saw that the Welch finance-focused strategy, which seemed so triumphant in 2001, turned out to be tragically flawed longer-term. It had led the company to largely abandon its earlier, historical commitment to value-delivery-based product-innovation in the traditional, non-financial businesses. In addition, the shiny new finance-focused strategy was riskier than the company understood, until it was too late. Finally, the strategy’s apparent triumph must have encouraged the Welch and Immelt teams to believe they could succeed in any business. That belief was nurtured by some successes without electrical roots––especially GE Plastics, and perhaps NBC––and further by the management-training program at Crotonville. The myth of GE invincibility in any business would be dismantled, first by the financial crisis and then further in the years after 2011.

Once the GEC-centered strategy imploded, Immelt and GE hoped to replace it with a grand, industrial-related strategy that could drive nearly as much growth as the Welch strategy had. However, such a strategy needed to identify and deliver value propositions centrally important to industrial customers’ long-term success, including by product innovation. GE did focus on a technology for marginally optimizing operational efficiency. Though not the hoped-for sweeping growth strategy, this opportunity could have had value, but unfortunately GE converted it into a grandiose, unrealistic vision it could not deliver.

In 2011, the company correctly foresaw the emerging importance of what it coined the “industrial internet of things” (IIoT), combined with big-data analytics. This technology, GE argued, would revolutionize predictive maintenance––one day eliminating unplanned downtime––saving billions for GE and others. However, no longer a financial powerhouse and now refocusing on its industrial business, GE did not want to be seen as a big-iron dinosaur; it wanted to be cool––it wanted to be a tech company.

So, focused on the IIoT, GE dubbed itself the world’s “first digital industrial company.” It would build the dominant IIoT software platform, on which GE and others would then develop the analytics-applications needed for its predictive-maintenance vision. Immelt told a 2014 GE managers’ meeting, “If you went to bed last night as an industrial company, you’re going to wake up this morning as a software and analytics company.” This initiative was applauded as visionary and revolutionary. HBS’ Michael Porter noted:

“It is a major change, not only in the products, but also in the way the company operates…. This really is going to be a game-changer for GE.”

However, GE’s game-changer proved to be both overreach and strategically incoherent, ultimately changing little in the world. Going back to the Welch era, GE leaders had long believed they could master any business, using GE managerial principles and techniques (e.g., Welch’s vaunted systems of professional management and development, the celebrated Six Sigma, the insistence on a dominant position in every market, and others). Even though overconfidence had already contributed to GE’s financial-business disaster, perhaps we still shouldn’t be surprised by the company’s stated 2016 “drive to become what Mr. Immelt says will be a ‘top 10 software company’ by 2020.” As GE leadership told the MIT Sloan Management Review in 2016:

GE wants to be “unapologetically awesome” at data and analytics… GE executives believe [GE] can follow in Google’s footsteps and become the entrenched, established platform player for the Industrial Internet — and in less than the 10 years it took Google.

The company implausibly declared its intent to lead and control the IIoT/analytics technology, including its complex software and IT. Such control would include application to all industrial equipment owned by GE, its customers, and its competitors. This vision was quixotic, including its inherent conflicts (e.g., competitors and some customers were uninterested in sharing crucial data, or otherwise enabling GE’s control).

GE’s big-data initiative did offer a value proposition––eliminating unplanned downtime, which sounds valuable––to be achieved via IIoT/analytics. However, how much value could be delivered, relative to its costs and tradeoffs, and the best way to perform such analytics, would obviously vary dramatically among customers. The initiative seemed rooted more in a vision of how great GE would be at IIoT/analytics than in specific value propositions valued by specific segments. It thus betrayed the same internal focus that had plagued the company earlier.

In contrast, Caterpillar––as discussed in Part Three––used IIoT/analytics technology strategically, to better deliver its clear, core value proposition. Developed by deeply studying and living with its customers, Cat’s value proposition focuses on lower total-life equipment cost for construction and mining customers, provided via superior uptime. Cat also understood, realistically, that it was not trying to become the Google of the industrial internet; rather, it would depend heavily on a partner with deep expertise in analytics, never imagining it could do it all itself. GE’s IIoT initiative was very enthusiastic about the power of data analytics, but it seems plausible that GE never grasped or acted on the importance of the in-depth customer-understanding that Cat demonstrated so successfully.

In addition, the GE strategy did not seem intended to help develop major new product-innovations as part of delivering winning value propositions. IIoT/analytics technology could possibly enhance continuous improvement in efficiency, cost, and quality, but could not return the company’s focus, as needed, to breakthrough product-innovation. Innovation consultant Greg Satell noted that GE saved billions in cost by “relentlessly focusing on continuous improvement.” At the same time, he attributes the scarcity of significant inventions at GE since the 1970s to a “lack of exploration.” He writes, “Its storied labs continuously improve its present lines of business, but do not venture out into the unknown, so, not surprisingly, never find anything new.”

Value can be delivered using IIoT/analytics. GE hired hundreds of smart data scientists and other capable people; no doubt it delivered value for a few customers and could have done more had it continued investing. Yet it spent some $4 billion, made illusory claims to supplying––and controlling––everything in the industrial internet, and ultimately achieved minimal productive results. GE found it could not compete with the major cloud suppliers (e.g., Amazon and Microsoft). More important, effective analytics on the IIoT required vastly greater customization––to specific sectors and individual customers––than GE had assumed, not a single, one-size-fits-all platform. Before giving up in 2018, GE had convinced only 8% of its industrial customers to use its Predix platform.

Part Four: Energy Misfire (2001-2019)

So, after 2001 GE first tried to ride the GEC tiger and was nearly eaten alive in the process. Then it chased the illusion of IIoT dominance, with not much to show for it. Meanwhile, GE was still not addressing its most pressing challenge––the need for a new, synergistic strategy for its industrial businesses, to replace the growth engine previously supplied by the finance businesses. Especially important were its core but inadequately understood energy-related markets. In these, GE harbored costly, internally-driven illusions, instead of freshly and creatively identifying potentially winning, energy-related value propositions.

After the financial crisis, the eventual exit from the financial businesses (except for financing specifically to support the sale of GE industrial products) was already very likely. With two-thirds of the remaining company being energy-related, strategy for this sector clearly should have been a high priority. However, in its industrial businesses, including energy, GE had long been essentially internally-driven––prone to pursue what it was comfortable doing. (Admittedly, the IIoT venture was an aberration, in which GE ventured where it was not comfortable––for good reason––developing new software, big-data analytics, or AI.) Otherwise, in the energy-related businesses, GE stayed close to what it saw as core competencies, rather than rethinking and deeply exploring what these markets would likely most value in the foreseeable future.

In these markets, GE focused on fuels based on immediate-customers’ preferences, and where GE saw competitive advantage. Although GE developed a wind power business, it largely stayed loyal to fossil fuels. A 2014 interview in Fast Company explains why:

Immelt’s defining achievement at GE looked to be his efforts to move the company in a greener direction [i.e., its PR-marketing campaign, “Ecomagination”].… But GE can only take green so far; this organization fundamentally exists to build or improve the infrastructure for the modern world, whether it runs on fossil fuels or not. Thus, GE has simultaneously enjoyed a booming business in equipment for oil and gas fracking and has profited from the strong demand in diesel locomotives thanks to customers needing to haul coal around the country.… In the end, though, it will serve its customers, wherever they do business.

Serving immediate customers can be a good practice. However, it can become counter-productively customer-compelled if a business ignores unmistakable trends in technology costs and the preferences of many end-users. With a seemingly safe, internally-driven view, GE in 2014 made a myopic bet on fossil fuels, acquiring the gas-turbine operations of the French company Alstom. GE badly underestimated the rise of renewable energy, and the bet ultimately resulted in a $23 billion write-down.

As Part Four will review in some detail, this loss on gas turbines capped GE’s more fundamental, long-term failure, starting in about 2000, to realize its great historic opportunity––creatively leading and capitalizing on the global transition to zero-emission energy, especially renewables. The value proposition that most energy end-users worldwide wanted was increasingly clear after 2000––energy with no tradeoffs in reliable, safe performance, zero (not fewer) emissions, and lower cost.

In this failure in the energy market, GE no doubt saw itself as following its customers’ demands. In customer-compelled fashion, GE kept listening to its more immediate customers’ increasing hunger for fossil-fuel-based energy generation. The company would have needed a much more customer-focused, science-based perspective to see, and help catalyze, the market’s direction. It needed to study end-users, not just power-generators, and to project the increasingly inevitable major reductions in renewable-energy costs––anticipating the shift away from gas and other fossil fuels by many years, not just a few months.

Given its historical knowledge and experience, GE was uniquely well positioned to benefit long-term from the world’s likely multi-trillion-dollar investment in this value proposition by 2050. We can acknowledge that, starting in the early 2000s, GE touted its “Ecomagination” campaign and built a good-sized wind-power business. However, taking this value proposition seriously––leading a global energy transformation––would have required a more proactively creative strategic approach. That would have meant designing the comprehensive value-delivery-systems needed to lead, accelerate, and, over the long term, profitably capitalize on that global energy transition.

Fundamentally important, those new value-delivery-systems would have needed to include an aggressive return to the central focus on major product-innovation that built the company in its first century. In the first decade after 2000, many people, including most energy experts, were highly skeptical that a global transition to zero-emission energy was possible in less than most of the current century. Yet the basic technology needed for the energy transition in electricity and ground transportation––which account for a crucial bulk of greenhouse emissions––had already been identified by 2000: renewable generation, especially solar and wind, bolstered by battery storage. As has largely been confirmed over the last twenty years, that technology––not major new, unimagined breakthroughs––only needed to be massively refined and driven down its experience-based cost curves.
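As a rough illustration of such an experience curve (our notation and numbers, not GE’s), unit cost typically falls by a fixed fraction with each doubling of cumulative production:

\[
C(x) = C_{0}\left(\frac{x}{x_{0}}\right)^{-b}, \qquad \text{learning rate} = 1 - 2^{-b}
\]

Here C(x) is the unit cost after cumulative output x. For solar photovoltaic modules, that learning rate has often been estimated at roughly 20% per doubling; compounded over the many doublings since 2000, it is what turned a once-exotic technology into the low-cost competitor described below.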

As will be discussed in Part Four, key parts of solar energy technologies not only dropped in cost faster than anyone expected, but also became unprofitable commodities. As GE and many others learned in the early 2000s, producing solar panels in competition with the Chinese makers became a mostly losing business. However, there are many other elements of renewable energy––wind, storage (e.g., batteries), long-distance transmission, etc. Some of these will inevitably be profitable, even if cranking out commodity solar panels is never among them. The increasingly clear reality since 2000 has been that renewable energy will dominate the global market, and GE should have been playing a leading role.

Other technologies, perhaps including hydrogen and battery chemistries yet to be developed, will be needed for some industrial and other sources of emissions not likely replaceable by solar or wind. However, for electricity and ground transport, we now know that these proven renewable-energy technologies are lower cost than the fossil-fuel incumbents. In the wide range of product-innovations needed to enable this part of the energy transition, GE could and should have been leading the world these past twenty years, rather than issuing PR lip-service with meagre substance.

Such actions to lead the energy transition, discussed in Part Four, would have included aggressive, imaginative GE value-delivery-systems involving not just power generation but also long-distance transmission, electrification of energy uses (e.g., heating, cooling, transportation, industrial processes), storage––especially battery––technology, and energy-focused financing. Sharing a zero-emissions value proposition could have created great synergy across these GE product-innovations and related businesses. GE could also have used its once-great credibility to influence public policy, including support for economically rational zero-emission technologies––that is, without asking end-users to accept higher costs or other tradeoffs to save the planet. GE could have proactively and creatively generated great corporate wealth if policy had evolved to allow zero-emission solutions to deliver their potentially superior value.

More strategically fundamental than its inept, losing bet on natural gas turbines after 2013 was GE’s historic lost opportunity in its core energy markets: the enormous and inevitable transition to zero emissions. The same customer-focused, science-based, synergistic strategy, including major product-innovation, that characterized GE’s first century could have––and should have––been central to the company since 2000.

Eventual impact of GE’s last four decades––on GE and the world

After 1981, GE achieved a twenty-year meteoric rise behind the Welch strategy, but fundamentally failed to extend the creative, persistent focus on superior value-delivery via product-innovation that drove the company in its first century. The cumulative effect of the company’s strategic illusions and inadequate value-delivery was that GE eventually lost control of its destiny, along with most of its market value and importance. Even after the company’s recent strengthening, its market value is still only about 20% of its 2000 Welch-era peak, and less than 50% of its 2016 value––its post-financial-crisis peak.

The world also incurred losses, including regrettable influences on business. Rushing to emulate GE, many companies embraced Six Sigma’s focus on continuous marginal improvement, and an idolatry of maximizing shareholder value. For GE and its shareholders, these practices yielded some benefits. However, they also coincided with reduced value delivery, including a sharp decline in major product-innovation. We can see some results of this influence in the world’s lagging innovation and productivity, notwithstanding the often-overrated inventions of the finance and tech sectors. Also lost to the world was the value GE might have delivered had it seriously acted on its key opportunity in energy. These losses were the tragedy in GE’s lost control of its destiny in the four decades after 1981.

Accordingly, after this Introductory Overview, the series continues with four parts:

Part One: Customer-Focused, Science-Based, and Synergistic Product-Innovation (1878-1980)

Part Two: Unsustainable Triumph of the Welch Strategy (1981-2010)

Part Three: Digital Delusions (2011-2018); and

Part Four: Energy Misfire (2001-2019)

To continue reading this Series on GE, go to Part One.

Footnotes––GE Series Introductory Overview:

[1] Though of course not using our present-day, value-delivery terminology

Like a Compass, Big Data Helps –– If You Know What Direction to Take

Big Data Can Help Execute
But Not Discover New Growth Strategies

By Michael J. Lanning––Copyright © 2017-18 All Rights Reserved

Business investment in “big data analytics” (or “big data”) continues growing. Some of this growth is a bet that big data will not only improve operations, but reveal hidden paths to new breakthrough growth strategies. This bet rests partly on the tech giants’ famous use of big data. We have recently seen the emergence, in part also using big data, of major new consumer businesses, e.g., Uber and Airbnb––discussed in this second post of our data-and-growth-strategy series. Again using big data, some promising growth-strategies have also emerged recently in old industrials, including GE and Caterpillar (discussed in our next post). Many thus see big data as a road map to major new growth.

Typical of comments on the new consumer businesses, Maxwell Wessel's HBR article How Big Data is Changing Disruptive Innovation called Uber and others "data-enabled disruptions." Indefatigable big-data fan Bernard Marr wrote that Uber and Airbnb were only possible with "big data and algorithms that drive their individual platforms… [or else] Uber wouldn't be competitive with taxi drivers." In late 2016, McKinsey wrote, "Data and analytics are changing the basis of competition. [Leaders] launch entirely new business models…[and] the next wave of disruptors [e.g. Uber, Airbnb are] … predicated on data and analytics." MIT's SMR Spring 2017 report, Analytics as a Source of Business Innovation, cited Uber and Airbnb as "poster children for data-driven innovation."

Like a compass, big data only helps once you have a map––a growth-strategy. To discover one, first “become the customer”: explore and analyze their experiences; then creatively imagine a superior scenario of experiences, with positives (benefits), costs, and any tradeoffs, that in total can generate major growth. Thus, formulate a “breakthrough value proposition.” Finally, design how to profitably deliver (provide and communicate) it. Data helps execute, not discover, new growth strategies. Did Uber and Airbnb use data to discover, not just execute, their strategies? Let’s see how these discoveries were made.

Uber

Garrett Camp and Travis Kalanick co-founded Uber. Its website claims that, in Paris in late 2008, the two “…had trouble hailing a cab. So, they came up with a simple idea—tap a button, get a ride.” Kalanick later made real contributions to Uber, but the original idea was Garrett Camp’s, as confirmed by Travis’ 2010 blog, and his statement at an early event (quoted by Business Insider) that “Garrett is the guy who invented” the app.

However, the concept did not just pop into Camp's mind one evening. As described below, the idea's genesis lay in his and his friends' frustrating experiences using taxis. He thought deeply about these experiences, experimented with alternatives, and imagined ideal scenarios––essentially the strategy-discovery methodology mentioned above, which we call "become the customer." He then recognized, from his own tech background, the seeds of a brilliant solution.

Camp was technically accomplished. He had researched collaborative systems, evolutionary algorithms and information retrieval while earning a Master’s in Software Engineering. By 2008 he was a successful, wealthy entrepreneur, having sold his business, StumbleUpon, for $75M. This “discovery engine” finds and recommends relevant web content for users, using sophisticated algorithms and big data technologies.

As Brad Stone’s piece on Uber in The Guardian recounted earlier this year, the naturally curious Camp, then living in San Francisco, had time to explore and play with new possibilities. He and friends would often go out for dinner and bar hopping, and frequently be frustrated by the long waits for taxis, not to mention the condition of the cars and some drivers’ behavior. Megan McArdle’s 2012 Atlantic piece, Why You Can’t Get a Taxi, captured some of these complaints typical of most large cities:

Why are taxis dirty and uncomfortable and never there when you need them? Why is it that half the time, they don’t show up for those 6 a.m. airport runs? How come they all seem to disappear when you most need them—on New Year’s Eve, or during a rainy rush hour? Women complain about scary drivers. Black men complain about drivers who won’t stop to pick them up.

The maddening experience of too few taxis at peak times reflected the industry's strong protection. For decades, regulation limited taxi licenses ("medallions"), constraining competition and protecting both revenue and medallion value. The public complained, but cities' attempts to increase medallions always met militant resistance; drivers would typically protest at city hall. And if you didn't like your driver or car, you could withhold the tip or complain, but without any real impact.

Irritated, Camp restlessly searched for ways around these limits. As hailing cabs on the street was unreliable, he tried calling, but that was also problematic. Dispatchers would promise a taxi "in 10 minutes," but it often wouldn't show; he'd call again, but they might lie or not remember him. Next, he started calling several dispatchers and reserving a cab from each, taking the first to arrive, but this stopped working once they blacklisted Camp's mobile phone.

He then experimented with town-cars (or "black cars"). These were reliable, clean, and comfortable, with somewhat more civil drivers, though more expensive. However, as he told Stone, their biggest problem, which exacerbated their cost, was the dead time between rides; to a lesser degree, this also affected regular taxis. As McArdle wrote, "Drivers turn down [some long-distance fares since] they probably won't get a return fare, and must instead burn time and gas while the meter's off," which can wipe out the day's profit.

Camp could now imagine much better ride-hailing experiences, but how to implement this vision? Could technology somehow balance supply and demand, and improve the whole experience for riders and drivers? At that moment (as he later told Stone) Camp recalled a futuristic James Bond scene he loved and often re-watched, from the 2006 Casino Royale. While driving, Agent 007's phone shows an icon of his car, on a map, approaching The Ocean Club, his destination. Camp immediately recognized that such a capability could bring his ride-hailing vision to life.

When the iPhone launched in 2007, it not only included Google Maps; Camp knew it also had an accelerometer, which could detect whether the car was moving. Thus, the phone could function like a taxi meter, charging for time and distance. And in the summer of 2008, Apple had also just introduced the App Store.

Camp knew this meant he could invent a "ride-hailing app" that would deliver benefits––positive experiences. With it, riders and drivers would digitally book a ride, track and see estimated arrival times, optimize routes, make payments, and even rate each other. The app would also use driver and rider data to match supply and demand. This match would be refined by dynamic ("surge") pricing, adjusting in real time as demand fluctuates. Peak demand would involve a "tradeoff": a higher price but, finally, a reliable supply of rides.
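To make the meter-plus-surge idea concrete, here is a minimal sketch of how such pricing could work. It is purely illustrative: the rates, the surge rule, and the function names are our assumptions, not Uber's actual algorithm.

```python
# Hypothetical sketch of phone-as-meter pricing with a surge multiplier.
# Rates, caps, and the surge rule are illustrative assumptions, not Uber's model.

def surge_multiplier(ride_requests: int, available_drivers: int) -> float:
    """Raise the price as demand outstrips driver supply; never discount below 1.0x."""
    if available_drivers == 0:
        return 3.0  # assumed cap when no drivers are free
    ratio = ride_requests / available_drivers
    return min(3.0, max(1.0, ratio))

def estimated_fare(minutes: float, km: float,
                   ride_requests: int, available_drivers: int,
                   base: float = 2.00, per_min: float = 0.35, per_km: float = 1.10) -> float:
    """Charge for time and distance (the 'taxi meter'), scaled by the current surge."""
    meter = base + per_min * minutes + per_km * km
    return round(meter * surge_multiplier(ride_requests, available_drivers), 2)

# Example: a 15-minute, 6 km ride during a demand spike (40 requests, 25 free drivers).
print(estimated_fare(15, 6, ride_requests=40, available_drivers=25))  # ~1.6x surge
```

The point is only that the supply-demand match, and the tradeoff of paying more at peak in exchange for a reliable ride, can be expressed as a few simple rules; a real system would layer prediction and optimization on top.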

At this point, Camp grew more excited, sensing how big the concept might be, and pressed his friend Travis Kalanick to become CEO (Camp would refocus on StumbleUpon). Despite Kalanick’s famous controversies (ultimately leading to his stepping down as CEO) many of his decisions steered Uber well. Camp’s vision still included fleets of town-cars, which Kalanick saw as unnecessary cost and complexity. Drivers should use their own cars, while Uber would manage the driver-rider interface, not own assets. Analyzing the data, Kalanick also discovered the power of lower pricing: “If Uber is lower-priced, then more people will want it…and can afford it [so] you have more cars on the road… your pickup times are lower, your reliability is better. The lower-cost product ends up being more luxurious than the high-end one.”

With these adjustments, Uber's strategy was fully developed. Camp and later Kalanick had "become the customer," exploring and reinventing Uber's "value propositions"––the ride-hailing experiences, including benefits, costs, and tradeoffs, that Uber would deliver to riders and drivers. These value propositions were emerging as radically different from, and clearly superior to, the status quo. It was time to execute, which required big data, first to enable the app's functionalities. Kalanick also saw that Uber must expand rapidly, to beat imitators into new markets; analytics helped identify the characteristics of likely drivers and riders, and the cities where Uber was most likely to succeed. Uber became a "big data company," with analytics central to its operations. It is still not profitable today and faces regulatory threats, so its future success may increasingly depend on innovations enabled by data. Nonetheless, for today at least, Uber is valued at $69B.

Yet, to emulate Uber’s success, remember that its winning strategy was discovered not by big data, but by becoming its users. Uber studied and creatively reinvented key experiences––thus, a radically new value proposition––and designed an optimal way to deliver them. Now let’s turn to our second, major new consumer business, frequently attributed to big data.

Airbnb

Airbnb's launch was relatively whimsical and sudden. It was not the result of frustration with an existing industry, as with ride-hailing. Rather, the founders stumbled onto a new concept that was interesting, and appealing to them, but not yet ready to fly. Their limited early success may have helped them stay open to evolving their strategy.

They embarked on an extended journey to “become” their users, both hosts and guests; they would explore, deeply rethink, and reinvent Airbnb’s key customer-experiences and its value proposition. Big data would become important to Airbnb’s execution, but the evolutionary discovery of its strategy was driven by capturing deep insight into customer experiences. Providing and communicating its value proposition, Airbnb outpaced imitators and other competitors, and is valued today at over $30B. Like Uber, Airbnb is a great example of the approach and concepts we call “value delivery,” as discussed and defined in our overview, Developing Long-Term Growth Strategy.

In 2007, Joe Gebbia and Brian Chesky, both twenty-seven and friends from design school, shared a San Francisco apartment. They hoped for entrepreneurial opportunities, but needed cash when their rent suddenly increased. A four-day design conference was coming to town, and most hotels were booked. Gebbia suggested "turning our place into 'designers bed and breakfast'–offering…place to crash…wireless internet…sleeping mat, and breakfast. Ha!" The next day, they threw together a web site, airbedandbreakfast.com. Surprisingly, they got three takers; all left happy, paying $80 per night (covering the rent). They all also felt rewarded by getting to hear each other's stories; the guests even offered advice on the new business. Chesky and Gebbia gave each other a look of "Hmmm…" and a new business had been born.

The concept seemed compatible with the founders’ values; as The Telegraph later wrote, “Both wanted to be entrepreneurs, but [not] ‘create more stuff that ends up in landfill.’ …a website based on renting something that was already in existence was perfect…”

However, they at first underestimated, and may have misunderstood, their concept. An early headline on the site read, "Finally, an alternative to expensive hotels." Brian Chesky says:

We thought, surely you would never stay in a home because you wanted to…only because it was cheaper. But that was such a wrong assumption. People love homes. That’s why they live in them. If we wanted to live in hotels, more homes would be designed like hotels.

They were soon joined by a third friend, engineer Nathan Blecharczyk, who says that this mix of design and engineering perspectives [view 01:19-01:40 in interview]:

…was pretty unusual and I actually attribute a lot of our success to that combination. We see things differently. Sometimes it takes a while to reconcile those different perspectives but we find when we take the time to do that we can come up with a superior solution, one that takes into account both points of view.

Soon after the initial launch, they used the site frequently, staying with hosts, and gathering insights into experiences. Two key early discoveries were: 1) Payments, then settled in cash from guest to host, created awkward “So, where’s my money?” moments, and should be handled instead with credit card, through the Airbnb site; and 2) while they originally assumed a focus on events, when hotels are over-booked and expensive, users surprised them by asking about other travel, so they realized that they had landed in the global travel business. A 2008 headline read, “Stay with a local when traveling.”

In August of 2008, the team thought they had made a breakthrough. Obama would accept the Democratic nomination before 100,000 people in Denver, which only had 30,000 hotel rooms. So, Airbnb timed its second launch for the convention. Sure enough, they saw a huge booking spike…but it promptly dropped back to near-zero, days later.

Searching for a promotional gift for hosts, they pulled off a scrappy, startup stunt. They designed and hand-assembled 500 boxes of cereal, with covers they convinced a printer to supply for a share of sales: Obama Os (“Hope in every bowl”) and Cap’n McCain’s (“Maverick in each bite”). CNN ran a story on it, helping sell some at $40 per box, for over $30K total––enough to survive a few more months.

But by mid-2009 they were still stalled, and about to give up. Some fifteen investors had ignored Airbnb or seen nothing in it. Then Paul Graham, a founder of "Y Combinator" (YC, a highly regarded, exclusive start-up accelerator), granted Airbnb his standard five-minute interview, which he spent telling them to find a better idea ("Does anyone actually stay in one of these? …What's wrong with them?"). But on the way out the door, thinking all was lost anyway, Chesky handed a box of Obama Os to Graham, who asked, "What's this?" When told, Graham loved this story of scrappy, resourceful unwillingness to die: if they could sell this cereal, maybe they could get people to rent their homes to strangers, too.

Joining YC led to some modest investments, keeping Airbnb alive. Still, weekly revenues were stuck at $200. Paul pushed them to analyze all their then-forty listings in their then-best market, New York. Poring over them, Gebbia says, they made a key discovery:

We noticed a pattern…similarity between all these 40 listings…the photos sucked…People were using their camera phones [of poor quality in 2009] or…their images from classified sites. It actually wasn’t a surprise that people weren’t booking rooms because you couldn’t even really see what it is that you were paying for.

Paul urged them to go to New York immediately, spend lots of time with the hosts, and upgrade all the amateur photography. They hesitated, fearing pro photography was too costly to "scale" (i.e., to use at large scale). Paul told them to ignore that ("do things that don't scale"). They took that as license to simply discover what a great experience would be for hosts, and to worry about scale economics only later. A week later, the results of the test came in, showing far better response and doubling total weekly revenue to $400. They got it.

The team reached all 40 New York hosts, selling them on the (free) photos, but also building relationships. As Nathan Blecharczyk explains [view 18:50-21:27 in his talk], they could follow up with suggestions that they could not have made before, e.g., enhancements to wording in listings or, with overpriced listings, “start lower, then increase if you get overbooked.”

Of course, hi-res photography is common on websites today, even craigslist, and seems obvious now, as perhaps it is to think carefully about wording and pricing in listings. However, these changes made a crucial difference in delivering Airbnb’s value proposition, especially in helping hosts romance and successfully rent their home. This time-consuming effort greatly increased success for these hosts. After that, “people all over the world started booking these places in NY.” The word spread; they had set a high standard, and many other hosts successfully emulated this model.

To even more deeply understand user experiences, the team used "becoming the customer," or what Gebbia calls "being the patient," an approach shaped by his design-thinking background.

[As students, when] working on a medical device we would go out [and] talk with…users of that product, doctors, nurses, patients and then we would have that epiphany moment where we would lay down in the bed in the hospital. We’d have the device applied to us…[we’d] sit there and feel exactly what it felt like to be the patient…that moment where you start to go aha, that’s really uncomfortable. There’s probably a better way to do this.

As Gebbia explained, “being the patient” is still an integral piece of Airbnb’s culture:

Everybody takes a trip in their first or second week [to visit customers, document and] share back to the entire company. It’s incredibly important that everyone in the company knows that we believe in this so much…

They gradually discovered that hosts were willing to rent larger spaces: from air beds to rooms, entire apartments, and houses. They also further expanded Airbnb's role, such as hosting reviews and providing a platform for host/guest communications. "Becoming the customer," they discovered a "personal" dimension of the experience; in 2011, Gebbia recounted [view 19:41-20:36] being a guest in "an awesome West Village apartment," owned by a woman named Katherine (away at the time), and being greeted with:

…a very personalized welcome letter…a Metro Card for me, and [menus from] Katherine’s favorite take-out places…I just felt instantly like, ‘I live here!’ [And on the street] felt like I have an apartment here, I’m like a New Yorker! So, it’s this social connection…to the person and their spaces; it’s about real places and real people in Airbnb. This is what we never anticipated but this has been the special sauce behind Airbnb.

This idea of personal connection may have helped address Airbnb’s crucial problem of trust (“who is this host, or guest?”). Again, they thought deeply about the problem, both redesigning experiences and applying digital solutions. One was “Airbnb Social Connections,” launched in 2011. As TechCrunch wrote, a prospective guest can:

…hook up the service to your social graph via Facebook Connect. Click one button, opt-in, and [in] listings for cities around the world you'll now see an avatar if a Facebook friend of yours is friends with the host or has reviewed the host. It's absolutely brilliant." [Chesky said it also helps guests and hosts] "have something in common…and helps you find places to stay with mutual friends, people from your school or university…

To further build trust, users were required to "verify, meaning share their email…online and offline identity." Hosts were asked "to include large photos of themselves on their profiles." Hosts and guests were urged to communicate before each stay.

Next, the team searched for yet more dimensions of users' positive experiences. As Leigh Gallagher wrote in Fortune, the team (in 2012) pondered, "Why does Airbnb exist? What's its purpose?" The global head of community interviewed 480 employees, guests, and hosts, and found that guests don't want to be "tourists" but to engage with people and culture, to be "insiders." The idea of "belonging" emerged, and a new Airbnb mission: "to make people around the world feel like they could 'belong anywhere.'" Chesky explains, "cities used to be villages. But…that personal feeling was replaced by 'mass-produced and impersonal travel experiences,' and along the way, 'people stopped trusting each other.'"

They adopted the "Bélo" symbol to echo this idea. Some mocked all this. TechCrunch declared it a "hippy-dippy concept"; others suggested that users just wanted a "cheap and cool place to stay," not "warm and fuzzy" feelings. But Gallagher argues that "belonging" can be more substantive than "having tea and cookies with [your host]":

It was much broader: It meant venturing into neighborhoods that you might not otherwise be able to see, staying in places you wouldn't normally be able to, bunking in someone else's space, [an experience] "hosted" for you, regardless of whether you ever laid eyes on him or her.

"Belonging" may have been a little warm and fuzzy, or even hokey, but Gallagher cites evidence that the idea seemed to resonate with many users. In late 2012, wanting to build further on this notion and having read an article in Cornell Hospitality Quarterly, Chesky began thinking that Airbnb should focus more on "hospitality." He read a book by Chip Conley, founder of a successful boutique-hotel chain. Conley wanted guests to check out, after three days, as a "better version of themselves"; he talked of democratizing hospitality, which had become "corporatized." Airbnb hired him.

Conley gave talks to Airbnb hosts and employees worldwide, and established a “centralized hospitality-education effort, created a set of standards, and started a blog, a newsletter, and an online community center where hosts could learn and share best practices.” He also started a mentoring program in which experienced hosts could teach new ones about good hospitality. Examples of the new standards included:

  • Before accepting guests, try to make sure their idea for their trip matches your “hosting style”; [e.g., if they want] a hands-on host and you’re private, it may not be the best match.
  • Communicate often; provide detailed directions. Establish any “house rules” clearly.
  • …beyond basics? [placing] fresh flowers or providing a treat upon check-in, like a glass of wine or a welcome basket. Do these things…even if you’re not present during the stay.

Airbnb took longer than Uber to discover its strategy, but it got there, again by climbing into the skin of its users, "becoming the customer" (or "being the patient") and living their experiences. Like Uber, Airbnb then used big data extensively to help execute. For example, building on the early New York experiments to help hosts set optimal prices, Airbnb used big data to create, as Forbes wrote, "Pricing Tips." This "constantly updating guide tells hosts, for each day of the year, how likely it is for them to get a booking at the price they've currently chosen." Airbnb's machine-learning package further helps hosts quantitatively understand the factors affecting pricing in their market.
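A guide of that kind boils down to estimating, for a given date and price, the probability of getting booked, and turning that estimate into advice. The sketch below is a hypothetical simplification: the logistic form, the coefficients, and the function names are our assumptions for illustration, not Airbnb's actual machine-learning package.

```python
import math

# Hypothetical booking-probability curve for one listing on one date.
# The logistic form and the coefficients are illustrative assumptions only.
def booking_probability(price: float, market_median: float, demand_score: float = 0.0) -> float:
    """Estimated chance of a booking at `price`, relative to the local market median.
    demand_score > 0 stands in for a high-demand date (event, holiday)."""
    x = 1.0 - 4.0 * (price / market_median - 1.0) + demand_score
    return 1.0 / (1.0 + math.exp(-x))

def pricing_tip(current_price: float, market_median: float, demand_score: float = 0.0) -> str:
    p = booking_probability(current_price, market_median, demand_score)
    advice = "consider lowering the price" if p < 0.5 else "price looks competitive"
    return f"Estimated chance of a booking at ${current_price:.0f}: {p:.0%} - {advice}"

print(pricing_tip(180, market_median=140))                    # priced well above the market
print(pricing_tip(130, market_median=140, demand_score=0.8))  # slightly below market, high-demand date
```

In a real system the probability would be fitted from historical bookings across many listings and dates, which is exactly where large-scale data earns its keep; the strategic insight about what hosts needed help with came first.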

In combination with the above initiatives to strengthen trust, personal connection, and even that sense of “belonging anywhere,” big data helped Airbnb continue to improve the value proposition it delivered. Its great strategic insights into user experiences, and its superb execution, allowed Airbnb to outdistance its imitative competitors.

So, both Uber and Airbnb made great use of big data. Yet, for both these game-changing businesses, "becoming the customer" (or "being the patient") was key to discovering the insights underlying their brilliant, breakthrough-growth strategies.

*   *   *

This post follows our first, Who Needs Value Propositions, in this data/strategy series. Our next post (#3––Big Data as a Compass––Industrial/B2B Businesses) looks at the industrial examples mentioned earlier (GE and Caterpillar), to compare their lessons with those in the above consumer cases. We then plan three more posts in this series:

  • #4: Dearth of Growth––Why Most Businesses Must Get Better at Discovering & Developing New Growth Strategies
  • #5: No Need to Know Why? Historical Claims (“Correlation is all we need, Causality is obsolete”) Propagated Misplaced Faith in Big Data as fount of new growth strategies
  • #6: Powerful for execution, big data is also prone to chronic, dangerous errors

Who Needs Value Propositions When We Have Big Data?

Asking too much from Big Data Analytics, Digital Marketers Confuse Growth Strategy with Marketing Efficiency and Short-Term Clicks

Posted June 14, 2017

In the past decade, “big data analytics” (or “big data”) has grown dramatically in business. Big data can help develop and execute strategy—as in Google and Facebook’s ad businesses, and Amazon and Netflix’s recommendation engines. Seeing these huge tech successes, many decided to just emulate how big data is used, hoping that big data analytics alone can drive development of long-term growth strategy. This belief is a delusion.

Winning strategies require that businesses discover new or improved experiences that could be most valued (though unarticulated) by customers, and redesign their businesses to profitably deliver these experiences. Big data can increase communication efficiency and short-term sales, or “clicks”, but changing the most crucial customer experiences can transform behaviors, attitudes, and loyalty, leading to major growth. Such insight is best found in many businesses by in-depth exploration and analysis of individual customers—and cannot be found in the correlations of big data. Some questions are easiest answered with big data, but availability of data should not drive what questions to ask. Data-driven priorities can obscure fundamental strategic questions, e.g. what could customers gain by doing business with us—what value proposition should we deliver, and how?

Discovering such insights requires deeply understanding customers’ usage of relevant products or services. In some businesses, such as online retailers, customers’ buying-experiences constitute usage of the service, so these businesses do have usage data, and can use big data in developing strategy. For most, such as product producers, however, usage happens only after purchase, so they have purchase but not usage data, and cannot properly use big data to develop strategy. Feeling compelled to use big data, such businesses may use it anyway, on the data they have, which can help achieve short-term sales, but not to develop long-term growth strategy. However, these businesses still can— and must—develop insights into what usage experiences to focus on changing, and how.

Digital marketing now plays a major role in developing business strategy, and heavily uses big data. Big data predictive algorithms analyze customers’ past transactions and purchase or shopping behaviors, to increase the efficiency of matching customers with marketing offers, and strengthen short-term sales. Sustained major growth requires more than ratcheting reach-efficiency and tweaking the week-end promotional tally. Sustained growth requires creative exploration of customers’ current experiences, to discover breakthrough value propositions, and design ways to profitably provide and communicate them. This post and follow-ups discuss these concerns and suggest solutions.

Predicting transactions is not strategy

As illustration, a Customer Experience Management (CEM) system by Sitecore helps fictional apparel maker "Azure" (Sitecore's name) use big data to customize marketing to individual customers. Here, Azure intervenes with consumer "Kim" on her decision journey. When she visits their site anonymously, the data shows her matching their active-mother profile. Clicking on a shoes ad, she signs up for email alerts, providing her name and email. Azure begins building her profile. They email a belts promotion to customers predicted by the data as potentially interested—Kim buys one. Later, real-time location data shows Kim near an Azure store, so the CEM texts her an in-store discount on a new boots line; Azure is confident she'll like it based on her past actions. When the clerk scans the coupon on Kim's phone, the CEM prompts an offer of another product, a child's backpack, based on Kim's profile. Kim is impressed—Azure understands her interests, tracking her every action. She joins Azure's loyalty program, giving her sneak peeks at coming products. With data showing that Kim most often accesses the site by smartphone, Azure offers her their new mobile app. Via big data, Azure has improved the shopping and buying experiences, and efficiently stimulated short-term sales.
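Mechanically, a journey like Kim's is a set of data-triggered rules over a customer profile. The sketch below shows that kind of next-best-action logic in miniature; the profile fields, rules, and offers are hypothetical illustrations, not Sitecore's actual CEM.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of rule-driven "next best action" logic, in the spirit of the
# Azure/Kim narrative above. Profile fields, rules, and offers are illustrative only.

@dataclass
class CustomerProfile:
    segment: str = "unknown"                 # e.g. "active-mother", inferred from behavior
    email: Optional[str] = None
    purchases: list = field(default_factory=list)
    near_store: bool = False                 # real-time location signal
    prefers_mobile: bool = False

def next_best_action(p: CustomerProfile) -> str:
    """Pick one marketing action from the current state of the profile."""
    if p.email is None:
        return "show email sign-up prompt on the site"
    if p.near_store and "belt" in p.purchases:
        return "text an in-store discount on the new boots line"
    if p.prefers_mobile:
        return "offer the mobile app"
    return "send a segment-matched email promotion"

kim = CustomerProfile(segment="active-mother", email="kim@example.com",
                      purchases=["belt"], near_store=True)
print(next_best_action(kim))  # -> text an in-store discount on the new boots line
```

Note what such rules optimize: the efficiency and timing of the next offer. Nothing in the profile says anything about Kim's experience actually using the shoes, the belt, or the backpack, which is the point of the argument that follows.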

In applications of big data for marketing and growth-strategy, data scientists search for previously unknown correlations among customer transactional and behavioral data. For growth strategy, however, more understanding and creative thought is needed about why customers do what they do, what the consequential experiences have been, what is imperfect in these experiences, and how the business might cause these new or different experiences. These are typically unarticulated opportunities for improved customer experiences. Identifying them requires skilled observation and creative interpretation of current experiences—not replicable in most businesses by data collection and analytics. Such analysis, including customers’ product-usage behaviors, not just purchase, is crucial to developing value propositions that can generate major new growth.

Urging us to “Use big data to create value for customers, not just target them,” Niraj Dawar said in HBR that big data holds out big promises for marketers, including answers to “who buys what, when?” Marketers “trained their big data telescopes at a single point: predicting each customer’s next transaction,” in detailed portraits of consumers’ media preferences, shopping habits, and interests, revealing her next move.

In the Azure narrative, Azure is "pretty confident" of what Kim will want, where and when, based on understanding her interests and interactions. In addition to targeting, big data allows "personalizing"—using our knowledge and references to customers' past purchases and interests to make our marketing more relevant, and thus more effective in winning that next short-term promotional sale. This saga of Kim's "well-guided shopping journey" with Azure leaves Kim happy (though not entirely of her own free will). In this way, it is reminiscent of Minority Report's mall scene. The original short story and the 2002 film focused on prediction ("precognition") of crimes not yet committed (supernaturally foreseen by "PreCogs"). We can hope this premonition is only a dystopic nightmare, but marketers may find the film's futuristic marketing a utopian dream. The marketing is precisely targeted and highly personalized—ads and a holographic greeter automatically recognize and call out the character's name, reminding him of a recent purchase.

Fifteen years ago, the sci-fi film's marketing technology was showing us the future—ever more accurate predictions of each customer's next purchase. Big data is thus a kind of commercial precognition, with data scientists as PreCogs using big data rather than supernatural powers. Both narratives are fictional, but they illustrate the big-data logic for marketing and growth-strategy. Able to predict the customer's next transaction, the CEM produces targeted marketing, more efficient in customer-reach. Personalized marketing is more relevant, helping it stimulate short-term sales. A fundamental problem with this paradigm is that growth strategy needs more than accurate predictions of transactions. Such strategy must transform the behaviors, attitudes, and loyalty of customers and other players in the chain, based on insights about the causality underlying correlations.

Summary: Strategy is More than Prediction

Marketers are right to have yearned for much more factual data on what customers do, i.e. their behaviors. However, with big data it has been easy and commonplace to overemphasize customers’ behavior, especially as related to their buying process, without adequately understanding and analyzing the rest of their relevant experience. Businesses must understand customers’ usage experience, not just buying. They must also explore what’s imperfect about this experience, how it could be improved for the customer, what value proposition the business should deliver to them, and how. Such exploration must discover the most powerful, unarticulated customer-opportunities for greater value delivery, and redesign the business to profitably realize such opportunities. These traits are essential to how strategy is different from prediction—strategy must focus on what we want to make happen and how, not just what we might bet will happen.

Kim's past transactional behavior is analyzed to predict what she'll likely want next, but the analysis needs to be pushed further, to discover experiences and value propositions that could influence her and yield long-term growth. (See a similar complaint about limitations of data, from Clayton Christensen et al.) Actions—including product and service improvements, and an intense focus of marketing communications on customer benefits—must then be designed to optimally deliver these value propositions.

Growth of big data in tandem with digital marketing

IDC estimates global revenue for business data analytics will exceed $200B by 2020. As a recent review said, this expansion was enabled by several trends: continued rapid expansion of data, doubling every three years, from online digital platforms, mobile devices, and wireless sensors; huge capacity increases and cost reductions in data storage; and major advances in analytic capabilities including computing power and the evolution of sophisticated algorithms. Previously, “industry poured billions into factories and equipment; now the new leaders invest…in digital platforms, data, and analytical talent.” This investment expands the ability to predict the next short-term transaction, increase marketing-communications efficiency and promotional impact. It also drains resources needed for the more difficult but, as argued here, more strategically crucial exploration of customers’ usage experiences, and discovery of breakthrough-growth value propositions.

Using digital technology to market products and services, the digital marketing function has risen rapidly. Last year for the first time, US digital ad-spending surpassed TV, the traditional dominant giant. And digital marketing, both the source of most big data and the easiest place to apply it, increasingly leads development of business strategy.

Efficiency and relevance: important but usually much less so than effectiveness

More efficient marketing is desirable, but only if it's effective, which is often taken for granted in the focus on efficiency. Much digital-marketing faith is put in the four-part promise of "the right ad or offer, to the right person, at the right time, via the right place" (see here, and here). Most big-data applications focus on the last three, which mostly enhance efficiency, rather than on the "right ad," which determines effectiveness.

Hunger for efficiency also drives the focus on targeting. Personalizing, when possible and affordable, can also make customers more willing to hear the message, increasing efficiency—and possibly effectiveness—by its greater relevance.

However, effectiveness is the far more crucial issue. If a message does not credibly persuade customers, it is of little use to the business, even if "efficient." But targeting and personalizing marketing typically do not identify what behaviors or attitudes to change, or how to change them. This more fundamental strategic goal requires deeper understanding of the unarticulated value to customers of improved experiences, and detailed, creative exploration of the business's potential to profitably cause these improvements.

Reinforcing the predominant near-term and efficiency focus of big data in digital marketing is the nature of online sources typically available for big data. McKinsey estimated that, “so much data comes from short-term behavior, such as signing up for brand-related news and promotions on a smartphone or buying a product on sale. That short-term effect typically comprises 10 to 20 percent of total sales, while the brand…accounts for the rest.” This short-term nature of the readily available data biases marketers to focus on short-term transactional results.

Location-based, real-time big data—another advance in short-term promotion

It seems worth noting here that location-based marketing excites the digital-marketing world, which sees it as the "next big thing"; examples have included promotions from Skyy Vodka and Starbucks.

As location data gets more accurate (still problematic today), this approach will further improve promotional efficiency. In one illustrative test recounted in Ad Age, Brown-Forman, supplier of Herradura tequila, teamed with Foursquare (a search-and-discovery mobile app that helps find "perfect places [food, entertainment, etc.]"). Foursquare used Brown-Forman's list of accounts where Herradura is sold to target mobile and other Herradura ads to consumers whose phones were close to (or had been in) the shops, bars, or restaurants on the account list. They saw a 23% increase in visits to those accounts, a positive signal.
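At its core, that kind of campaign is a proximity check of a device's location against a list of accounts. Here is a minimal sketch under that assumption; the coordinates, radius, and function names are hypothetical, and this is not Foursquare's actual targeting system.

```python
import math

# Hypothetical proximity-targeting sketch: serve the ad only if the device is (or was
# recently) within a small radius of an account where the brand is sold.
# Coordinates, radius, and the account list are illustrative assumptions.

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

ACCOUNTS = [            # bars/shops on the brand's account list (illustrative coordinates)
    (37.7793, -122.4193),
    (37.7609, -122.4350),
]

def should_target(device_lat: float, device_lon: float, radius_m: float = 200.0) -> bool:
    """True if the device is within radius_m of any listed account."""
    return any(haversine_m(device_lat, device_lon, a_lat, a_lon) <= radius_m
               for a_lat, a_lon in ACCOUNTS)

print(should_target(37.7795, -122.4190))  # near an account  -> True
print(should_target(37.7000, -122.5000))  # far from any     -> False
```

Note that this improves only where and when an ad is shown; whether the ad itself persuades anyone is the effectiveness question raised above.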

Because big data was applied early by direct-marketing companies, big data today (further illustrated above by advances in location-based marketing) works more like direct-response marketing than demand generation. The problem, as noted earlier, is that businesses more than ever also need the latter: demand-generating activity that creates loyalty, and thus the behavioral changes that produce long-term growth. Some businesses may feel they don't need these "luxuries" when cheap, automated big-data options—digital PreCogs—proliferate.

But most businesses do need to make these serious strategic investments, in addition to and complementary with big data analytics. Having digitally captured countless petabytes of data describing Kim’s every action of shopping and buying, the business managers now need to spend time with Kim learning about her usage of that apparel. What were her experiences before and during usage of those shoes, the belt, and other items? And what of her daughter’s experiences with the backpack? What was imperfect, what could some better experiences be, what would be an improved superior value proposition, and what would it take to provide and communicate that proposition effectively and profitably? These intensively customer-focused actions can enable the discovery and activation of powerful insights for profitably influencing customers’ (and others’) behavior, a key basis for generating profitable major growth over time.

*   *   *

As mentioned above, this blog series will expand on these concerns about the way big data analytics has come to be used in growth strategy, including digital marketing, and on the recommended solutions for marketers and businesses, including how those solutions apply to most businesses.

Value Delivery Blog

4 Ways to Improve Delivering Profitable Value

Posted 4/13/15

Make Value Propositions the Customer-Focused Linchpin to Business Strategy

We suggest that Businesses should be understood and managed as integrated systems, focused single-mindedly on one thing – profitably delivering superior Value Propositions, what we call delivering profitable value (DPV). But most are not. Some readers may assume this discussion can only be a rehash of the obvious – surely everyone ‘knows’ what Value Propositions (VPs) are, and why they matter. But we suggest that most applications of the VP concept actually reflect fundamentally poor understanding of it, and fail to get the most out of it for developing winning strategies. In this post I’ll summarize 4 ways to improve on delivering profitable value, using the VP concept far more effectively – as the customer-focused linchpin to your strategy.

Delivering Profitable Value – Let’s first recap the key components of this approach:

Real & Complete Value Proposition – A key element of strategy; an internal document (not given to customers); as quantified as possible; makes 5 choices (discussed in depth here, and sketched in code after this recap):

  • Target customers (or other entities) for this VP?
  • Relevant timeframe in which we will deliver this VP?
  • What we want customers to do (e.g. buy/use and/or other behaviors/changes?)
  • Their competing alternatives (competitors, status quo, new technologies, etc.)?
  • Resulting experiences they will get & we deliver? Not a list of products & performance attributes, but the core of a VP; discussed here and further here, they are:
    • Specific, measurable events/processes – outcomes – in customer’s life/business, that result from doing as we propose (e.g., buy/use products/services, etc.)
    • Includes price and tradeoffs (inferior or equal experiences)
    • All as compared to competing alternatives

Deliver the chosen VP – A real VP identifies what experiences to deliver, not how; so manage each business as a Value Delivery System with 3 integrated high-level functions:

  • Choose the VP (discover/articulate a superior VP focused on resulting experiences)
  • Provide it (enable the VP/experiences to happen via product/service/attributes, etc.)
  • Communicate it (ensure customers understand/believe it via Marketing, Sales, etc.)

Profitable Value? – If customers conclude that a VP is superior to the alternatives, it generates revenues; if the cost of delivering it is less than those revenues, then the business creates profit (or shareholder wealth) – thus, it is delivering profitable value.
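To make the recap concrete, here is a minimal structural sketch of the five VP choices and the profitability test, expressed as code. The field names and the example values are our illustrative assumptions, not a prescribed template; a real VP is a richer, quantified internal document.

```python
from dataclasses import dataclass

# Minimal sketch of the five VP choices and the profitability test described above.
# Field names and the example are illustrative; a real VP is far more quantified.

@dataclass
class ResultingExperience:
    description: str        # measurable outcome in the customer's life or business
    vs_alternatives: str    # "superior", "equal", or "inferior" (a tradeoff)
    measure: str            # how it will be quantified

@dataclass
class ValueProposition:
    target_customers: str               # 1. whom this VP is for
    timeframe: str                      # 2. period over which we will deliver it
    desired_behavior: str               # 3. what we want customers to do
    competing_alternatives: list        # 4. what they would otherwise choose
    resulting_experiences: list         # 5. experiences, including price and tradeoffs

def delivers_profitable_value(expected_revenue: float, cost_to_deliver: float) -> bool:
    """Profitable value: the chosen VP wins revenue, and delivering it costs less."""
    return expected_revenue > cost_to_deliver

# Hypothetical example, loosely set in a B2B context:
vp = ValueProposition(
    target_customers="mid-size commercial printers in North America",
    timeframe="the next three years",
    desired_behavior="switch most coated-paper volume to our grade",
    competing_alternatives=["incumbent supplier", "status quo uncoated grades"],
    resulting_experiences=[
        ResultingExperience("press downtime cut by roughly 15%", "superior", "hours per month"),
        ResultingExperience("price about 5% higher per ton", "inferior", "$ per ton"),
    ],
)
print(delivers_profitable_value(expected_revenue=12_000_000, cost_to_deliver=9_500_000))  # True
```

The point of the structure is the discipline it forces: resulting experiences are stated as measurable outcomes relative to the customer's alternatives, not as a list of our products and performance attributes.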

*  *  *  *  *

4 areas where many businesses can improve on delivering profitable value:

  1. Avoid misunderstood, confused, and trivial definitions of ‘Value Proposition’
  2. Deliberately deliver the VP – rigorously define, link & manage each function to help Provide and/or Communicate the resulting experiences
  3. Think profitable value-delivery across the entire chain, not just the next link
  4. Discover new value-delivery insights by primarily exploring and analyzing what customers actually do, more than what they think and say they want

Now let’s consider each of these 4 areas in more detail:

  1. Avoid commonly misunderstood, confused or even trivialized definitions of a VP

The table below summarizes some common misperceptions about Value Propositions, followed by some discussion of the first two.

(Table: common misperceptions of a Real & Complete Value Proposition)

Of these misperceptions, the first two are perhaps most fundamental, being either:

  • Our message – an external document to directly communicate with customers, explaining why they should buy our offering; part of execution, not strategy

OR:

  • Part of strategy, but focused primarily on us (not customers) – our products/services, performance-attributes, functional skills, qualities, etc., not customers’ experiences

It’s not your Elevator Speech – a VP is strategy, not execution – For much greater strategic clarity and cross-functional alignment, avoid the common misunderstanding that confuses and equates a VP with messaging. Execution, including messaging is of course important. A VP, as part of your strategy, should obviously drive execution, including messaging; but strategy and execution are not the same thing. If you only articulate a message, without the guiding framework of a strategy, you may get a (partial) execution – the communication element – of some unidentified strategy. But you forgot to develop and articulate the strategy! The execution might still work, but based more on luck than an insightful, fact-based understanding of customers and your market.

This common reduction in the meaning of a VP, to just messaging, not only confuses execution with strategy, but also only addresses one of the two fundamental elements of execution. That is, a VP must be not only Communicated, but also Provided – made to actually happen, such as via your products/services, etc. If customers buy into your messaging – the communication of your VP – but your business does not actually Provide that VP, customers might notice (at least eventually). Though some businesses actually attempt this approach – promising great things, but not making good – and may even get away with it for a limited time, a sustainable business obviously requires not only promising (Communicating) but actually Providing what’s promised.

And it’s not about us – focus VPs on customers, not our products, services, etc. – The other common misuse of the VP concept starts by treating it (rightly) as a strategic decision and internal document. But then (again missing the point) such a so-called VP is focused primarily on us, our internal functions and assets, rather than on the customer and resulting experiences we will deliver to them.

Here it’s helpful to recall the aphorism quoted by the great Marketing professor Ted Levitt, that people “don’t want quarter-inch drill bits, they want quarter-inch holes.” A real VP is focused on detailed description of the ‘hole’ – the resulting experiences due to using the drill bit – not on description of the drill bit. Of course, the drill bit is very important to delivering the VP, since the customer must use the drill bit, which must have the optimal features and performance attributes, to get the desired quarter-inch hole. But first, define and characterize the VP, in adequately detailed, measurable terms; then separately determine the drill-bit characteristics that will enable the desired hole.

  2. Deliberately deliver the VP – rigorously define, link and manage what each function must do to help Provide and/or Communicate the resulting experiences

It’s impossible to link Providing and Communicating value without a defined VP. However, even with a chosen VP, it is vital to explicitly link its resulting experiences, to the requirements for Providing it (e.g., product and service) and for Communicating it. Companies can improve market share and profitability by rigorously defining the VP(s) they aspire to deliver and then rigorously linking to the Providing and Communicating processes. Failure to make the right links leads to a good idea, not well implemented. (See more discussion of this Value Delivery framework here.)

(Figure: the Value Delivery System)

  3. Think value-delivery across the entire chain, not just the next link

Most businesses are in a value delivery chain, whether simple or long and complex, as illustrated in the chain figure below.

Many companies survey their immediate, direct customer, asking them what they most value. Less often, they may ask what that customer thinks is wanted by players down the chain. They may rely for insight on that next customer, who may or may not be knowledgeable about others in the chain. There is often great value in exploring entities further down, to understand how each link in the chain can provide value to other links, including final customers, often with important implications for the manufacturer, among others. (See more discussion of Value Delivery Chains here.)

(Figure: a Value Delivery Chain)

  4. For Value Proposition insights, explore and analyze what customers actually do, not what they think they want

Businesses often conduct research that essentially asks customers, in various forms, to specify their needs. A limitation of such research is that customers often filter their answers by what they believe the supplier's capabilities to be. We believe a better way is to deeply study what entities (at several links in the chain) actually do. First, capture a virtual "Video One" documenting current experiences, including how an entity uses relevant products/services. Then create "Video Two," an improved scenario enabled by potential changes by the business in benefits provided and/or price, which nonetheless delivers more value to the entity than Video One. Then construct a third virtual video capturing competing alternatives. Finally, extract the new, superior VP implied by this exploration. The results come much closer to a winning VP than asking customers what they want. (See here for more discussion of creatively exploring the market using this methodology.)

(Figure: "Become the Customer" exploration)

Shaping Your View of B2B Value Props

Posted 3/31/15

In his recent interview of Michael Lanning (shared in the previous post of this Blog), Brian Carroll asked Mike about the role of Value Props in B2B strategy; Brian wrote about this conversation in the B2B Lead Blog.

Real Value Props – Direct from the Source

Posted 3/31/15

Recently, Michael Lanning was interviewed by Brian Carroll, Executive Director, Revenue Optimization, MECLABS Institute, for the MECLABS MarketingExperiments Blog. Brian asked Mike, as the original creator of the Value Proposition concept in 1983 while a consultant for McKinsey & Company, about the evolution of this concept over the past three decades. Their discussion, "Real Value Props, Direct from the Source," is here.

Welcome to: Value Delivery Blog

Updated 3/30/21

This Blog aims to help illuminate what makes for winning and losing business strategies, discussing what we hope are interesting and relevant examples. In this Blog we will especially view these topics through the lens that we––the DPV Group, LLC––call ‘Delivering Profitable Value (DPV)’ which contends that strategy should focus single-mindedly on profitable, superior ‘Value Delivery.’ That is, sustainable business success, while elusive and subject to good and ill fortune, can most realistically be achieved by a concentrated, deliberate effort to creatively discover and then profitably Deliver (i.e. Provide and Communicate) one or more superior Value Propositions to customers.

While many would perhaps say this idea is ‘obvious,’ most businesses do not actually develop or execute their strategies based on this understanding of the central role of the Value Proposition. A Value Proposition should not be a slogan or marketing device for positioning products/services, but rather the driving, central choice of strategy – fundamental to success or failure of the business. In this Blog we will try to bring that idea further to life for a variety of industries and markets.

As background, Michael Lanning first created and articulated/coined some of these concepts, including ‘Value Proposition,’ the ‘Value Delivery System’ and the related concept of ‘Value Delivery.’ He did so (after his first career, in Brand Management for Procter & Gamble) while a consultant for McKinsey & Company, working closely with Partner Ed Michaels in Atlanta in the early 1980s.

In life after McKinsey, he and his colleagues at the DPV Group built on those seminal concepts. Lanning did so first with Lynn Phillips, then a professor at the Stanford and Berkeley business schools, and later with his DPV Group colleagues, including long-time senior P&G manager Helmut Meixner, former McKinsey consultant and long-time paper-industry executive Jim Tyrone, and others. The expanded concepts of Value Delivery include the Customer's Resulting Experience and the notion that organizations, rather than being Value-Delivery-Focused, can be understood as 'Internally-Driven' and/or 'Customer-Compelled.'

Today, the DPV Group tries to help clients appreciate, internalize, and apply these Value Delivery strategy concepts, to pursue profitable, breakthrough growth in a wide range of B2B and B2C businesses. We hope that this Blog will also contribute to that goal.

Copyright © 2024 DPV Group | Contact: Email us at contact@dpvgroup.com, or phone us at (678) 427-1986.