Value Delivery Blog - DPV Group


Part One: Customer-Focused, Science-Based and Synergistic Product-Innovation

Illusions of Destiny-Controlled & the World’s Real Losses––

Value-Delivery in the Rise-and-Decline of General Electric

By Michael J. Lanning––Copyright © 2021 All Rights Reserved

This is Part One of our four-part series analyzing the role of strategy in GE’s extraordinary rise and dramatic decline. Our earlier post provides an Introductory Overview to this series.

We contend that as the world recovers from Covid-19, businesses more than ever need strategies focused on profitably delivering superior value to customers. Such strategy requires product (or service) innovation that: 1) is customer-focused; 2) often is science-based; and 3) in a multi-business firm, shares powerful synergies.

As this Part One discusses, GE in its first century––1878-1980––applied these three strategic principles, producing the great product-innovations that enabled the company’s vast, profitable businesses.

After 1981, however, as discussed later in Parts Two through Four, GE largely abandoned those three principles, eventually leading the company to falter badly. Part One therefore focuses on GE's first century and its use of those three strategic principles, because they are key both to the company's rise and––through their later neglect––to its decline.

Note from the author: special thanks to colleagues Helmut Meixner and Jim Tyrone for tremendously valuable help on this series.

Customer-focused product-innovation––in GE’s value delivery systems

In GE’s first century, its approach to product-innovation was fundamentally customer-focused. Each business was comprehensively integrated around profitably delivering winning value propositions––superior sets of customer-experiences, including costs. We term a business managed in this way a “value delivery system,” an apt description for most GE businesses and associated innovations in that era. Two key examples were the systems developed and expanded by GE, first for electric lighting-and-power (1878-1913), and then––more slowly but no less innovatively––for medical imaging (1913 and 1970-80).

GE’s customer-focused product-innovation in electric lighting-and-power (1878-1913)

From 1878 to 1882, Thomas Edison and team developed the electric lighting-and-power system, GE’s first and probably most important innovation. Their goal was large-scale commercial success––not just an invention––by replacing the established indoor gas-lighting system.[1] That gas system delivered an acceptable value proposition––a combination of indoor-lighting experiences and cost that users found satisfactory. To succeed, Edison’s electric system needed to profitably deliver a new value proposition that those users would find clearly superior to the gas system.

Therefore, in 1878-80, Edison’s team––with especially important contributions by the mathematician/physicist Francis Upton––closely studied that gas system, roughly modeling their new electric lighting-and-power system on it. Replacing gas-plants would be “central station” electricity generators; just as pipes distributed gas under the streets, electrical mains would carry current; replacing gas meters would be electricity meters; and replacing indoor gas-lamps would be electric, “incandescent” lamps. However, rather than developing a collection of independent electric products, Edison and Upton envisioned a single system, its components working together to deliver superior indoor-lighting experiences, including its usage and total cost. It was GE’s first value-delivery-system.

Since 1802, when Humphry Davy first demonstrated the phenomenon of incandescence––a wire heated by electricity could produce light––researchers had tried to develop a working incandescent lamp to enable indoor electric lighting. Typically, two conductors at the lamp’s base––connected to a power source––were attached to a filament (‘B’ in Edison’s 1880 patent drawing), all enclosed in a glass bulb. As current flowed, the filament provided resistance; overcoming that resistance converted the electricity’s energy into heat and incandescence. To prevent the filament’s combustion, the bulb held a (partial) vacuum.

By 1879, Edison and other researchers expected such electric-lighting, using incandescent lamps, to replace indoor gas-lighting by improving three user-experiences: 1.) lower fire and explosion risks, with minimal shock-risks; 2.) no more “fouling” of the users’ air (gas lights created heat and consumed oxygen); and 3.) higher quality, steadier light (not flickering), with more natural colors.

Yet a fourth user-experience, lamp-life––determined by filament durability––was a problem. Researchers had developed over twenty lamp designs, but the best still lasted only fourteen hours. Most researchers focused primarily on this one problem. Edison’s team also worked on it, eventually improving lamp life to over 1,200 hours. However, this team uniquely realized that increased durability, while necessary, was insufficient. Another crucial experience was the user’s total cost. Understanding this experience would require analyzing the interactions of the lamps with the electric power supplying them.

Of course, marketplace success for the electric-lighting system would require that its total costs for users be competitive––close to, if not below, gas-lighting––while allowing the company an adequate profit margin. However, the team’s cost model, built on then-common assumptions about incandescent-lamp design, projected totally unaffordable costs for the electric system.

All other developers of incandescent lamps before 1879 used a filament with low resistance. They did so because their key goal was lasting, stable incandescence, and low resistance increased energy flow, thus durable incandescence. However, low resistance led to a loss of energy in the conductors, for which developers compensated by increasing the conductors’ cross-sectional area––and that meant using more copper, which was very expensive in large quantities. Citing a 1926 essay attributed to Edison, historian Thomas Hughes quotes Edison on why low-resistance filaments were a major problem:

“In a lighting system the current required to light them in great numbers would necessitate such large copper conductors for mains, etc., that the investment would be prohibitive and absolutely uncommercial. In other words, an apparently remote consideration (the amount of copper used for conductors), was really the commercial crux of the problem.”

Thus, the cost of the electric power would be the key to the user’s total cost of electric lighting, and that power cost would be driven most crucially by the lamp’s design. Applying the science of electrical engineering (as discussed in the section on “science-based product-innovation,” later in this Part One), Edison––with key help from Upton––discovered the most important insight of their project. As Hughes wrote in 1979, Edison “…realized that a high-resistance filament would allow him to achieve the energy consumption desired in the lamp and at the same time keep low the level of energy loss in the conductors and economically small the amount of copper in the conductors.” Lamp design was crucial to total cost, not due to the lamp’s manufacturing cost but due to its impact on the cost of providing electricity to it. What mattered was the whole value-delivery-system.
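The copper economics behind this insight can be sketched numerically. The figures below (delivered power, line length, allowable loss) are hypothetical illustrations, not Edison's actual data; the point is only the scaling: for a fixed delivered power and loss budget, the copper cross-section needed grows with the square of the current, so higher-resistance lamps running at lower current slash the copper bill.

```python
# Sketch of the conductor-copper economics behind the high-resistance
# filament insight. All numeric values are hypothetical illustrations.

RHO_COPPER = 1.7e-8   # resistivity of copper, ohm*m
LINE_LENGTH = 1000.0  # total conductor length, m (hypothetical)
LOSS_FRACTION = 0.10  # allowable line loss as a fraction of delivered power

def copper_cross_section(power_w, voltage_v):
    """Conductor cross-section (m^2) needed so that line loss
    stays within LOSS_FRACTION of the delivered power."""
    current = power_w / voltage_v                               # I = P / V
    max_line_resistance = LOSS_FRACTION * power_w / current**2  # from I^2 * R <= f * P
    return RHO_COPPER * LINE_LENGTH / max_line_resistance       # R = rho * L / A

# Same delivered power, low-voltage (low-resistance lamps, high current)
# vs. high-voltage (high-resistance lamps, lower current) designs:
p = 10_000.0  # watts delivered (hypothetical)
a_low = copper_cross_section(p, 55.0)    # low-resistance lamps
a_high = copper_cross_section(p, 110.0)  # high-resistance lamps
print(a_high / a_low)  # 0.25 -> doubling voltage quarters the copper needed
```

Halving the current cuts the required copper by a factor of four, which is why the lamp's resistance, not its manufacturing cost, was "the commercial crux of the problem."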

Rethinking generation, Edison found that dynamos––the existing giant generators––were unacceptably inefficient (30-40%). So, he designed one himself with an efficiency rate exceeding ninety percent. Distribution cost was also reduced by Edison’s clever “feeder-and-main” reconfiguration of distribution lines.

Success, however, would also require individual lamp-control. Arc streetlamps, the only widespread pre-Edison electric lighting, were connected in series––a single circuit that linked the lamps and turned them all on and off at once; if one lamp burned out, they all went out. Edison knew that indoor lighting would only be practical if each lamp could be turned on or off independently of the others, as gas lamps had always allowed. So, Edison developed a parallel circuit: each lamp sat on its own branch, so current continued flowing to the other lamps when any one was switched off or failed.
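The behavioral difference between the two wirings can be sketched in a few lines. This is a toy model (the function names and values are illustrative, not from the source): in series, current flows only if every lamp in the single loop is intact; in parallel, each branch is independent.

```python
# Toy model of series vs. parallel lamp wiring (hypothetical example).

def series_lit(lamps_ok):
    """Series: one loop, so every lamp is lit only if all lamps work."""
    return [all(lamps_ok)] * len(lamps_ok)

def parallel_lit(lamps_ok):
    """Parallel: each lamp is its own branch, independent of the rest."""
    return list(lamps_ok)

lamps = [True, True, False, True]   # the third lamp has burned out
print(series_lit(lamps))    # [False, False, False, False] -> all go dark
print(parallel_lit(lamps))  # [True, True, False, True]    -> others stay lit
```

Parallel wiring is what let each indoor electric lamp match the individual control that gas lamps had always offered.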

As historian Paul Israel writes, “Edison was able to succeed where others had failed because he understood that developing a successful commercial lamp also required him to develop an entire electrical system.” Thus, Edison had devised an integrated value-delivery-system––all components designed to work together, delivering a unifying value proposition. Users were asked to switch indoor-lighting from gas to electric, in return for superior safety, more comfortable breathing, higher light-quality, and equal individual lamp-control, all at lower cost. The value proposition was delivered by high-resistance filaments, more efficient dynamos, feeder-and-main distribution, and lamps connected in parallel. Seeing this interdependence, Edison later explained in public testimony, as quoted by Hughes:

It was not only necessary that the lamps should give light and the dynamos generate current, but the lamps must be adapted to the current of the dynamos, and the dynamos must be constructed to give the character of current required by the lamps, and likewise all parts of the system must be constructed with reference to all other parts…  The problem then that I undertook to solve was stated generally, the production of the multifarious apparatus, methods, and devices, each adapted for use with every other, and all forming a comprehensive system.

In its first century, GE repeatedly used this strategic model of integration around customer-experiences. However, its electric lighting-and-power system was still incomplete in the late 1880s. Its two components––first power, then lighting––faced crises soon enough.

In 1886, George Westinghouse introduced high-voltage alternating-current (AC) systems with transformers, enabling long-distance transmission, reaching many more customers. Previously a customer-focused visionary, Edison became internally-driven, resisting AC, unwilling to abandon his direct-current (DC) investment, citing safety––unconvincingly. His bankers recognized AC as inevitable and viewed lagging behind Westinghouse as a growing crisis. They forced Edison to merge with an AC competitor, formally creating GE in 1892 (dropping the name of Edison, who soon left). GE would finally embrace AC.

However, AC brought new problems. Its adoption was slowed by a chronic loss of power in AC motors and generators, caused by magnetic hysteresis in their iron cores. This loss could be measured only after a device was built and tested. Then, in 1892, the brilliant young German-American mathematician and electrical engineer Charles Steinmetz published his law of hysteresis, the first mathematical description of such magnetic losses in materials. Now that engineers could minimize these losses while devices were still in design, AC became much more feasible.

In 1893 Steinmetz joined GE. He understood the importance of AC for making electricity more widely available and helped lead GE’s development of the first commercial three-phase AC power system. It was he who understood its mathematics, and with his leadership it would prevail over Westinghouse’s two-phase system.

However, the first generators were powered by reciprocating steam engines, and engineers knew that AC needed faster-revolving generators. A possible solution was Charles Parsons’s steam turbine, which Westinghouse, after buying its US rights in 1895, was improving. Then in 1896, engineer-lawyer Charles Curtis developed and patented a design drawing on aspects of Parsons’s work and others’. In part, his design made better use of steam energy.

Yet, when he approached GE, some––including Steinmetz––were skeptical. However, GE––with no answer for the imminent Parsons/Westinghouse turbine––stood to lose ground in the crucial AC expansion. Curtis won a meeting with GE technical director Edwin Rice who, along with CEO Charles Coffin, understood the threat and agreed to back Curtis.

The project struggled, as Westinghouse installed their first commercial steam turbine in 1901. However, GE persisted, adding brilliant GE engineer William Emmet to the team, which in 1902 developed a new vertical configuration saving space and cost. Finally, in 1903 the Curtis turbine succeeded, at only one-eighth the weight and one-tenth the size of existing turbines, yet the same level of power. With its shorter central-shaft less prone to distortion, and lower cost, GE’s value proposition for AC to power-station customers was now clearly superior to that of Westinghouse. After almost missing the AC transition, GE had now crucially expanded the power component of its core electric value-delivery-system.

As of 1900, GE’s electric lighting business––complement to its electric power business––had been a major success. However, with Edison’s patents expiring by 1898, the lighting business now also faced a crisis. GE’s incandescent-lamp filaments in 1900 had low efficiency––about 3 lumens per watt (LPW), no improvement since Edison. Since higher efficiency would give users more brightness at the same power and electricity cost, other inventors pursued this goal using new metal filaments. GE Labs’ first director, Willis Whitney, discovered that at extremely high temperatures carbon filaments would assume metallic properties, increasing filament efficiency to 4 LPW. However, new German tantalum-filament lamps then achieved 5 LPW, taking leadership from GE from 1902 to 1911.

Meanwhile, filaments made of tungsten emerged; with tungsten’s very high melting point, they achieved a much better 8 LPW. GE inexpensively acquired the patent for these––but the filaments were too fragile. As the National Academy of Sciences explains, “The filaments were brittle and could not be bent once formed, so they were referred to as ‘non-ductile’ filaments.” Then, GE lab physicist and electrical engineer William Coolidge, hired by Whitney, developed a process of purifying tungsten oxide to produce filaments that were not brittle.

Coolidge’s ductile tungsten was a major success, with efficiency of 10 LPW, and longer durability than competitive metal filaments. Starting in 1910, GE touted the new lamp’s superior efficiency and durability, quickly restoring leadership in the huge incandescent lamp market. This strong position was reinforced in 1913 when another brilliant scientist hired by Whitney, Irving Langmuir, demonstrated doubling the filament’s life span by replacing the lamp’s vacuum with inert gas. The earlier-acquired tungsten-filament patent would protect GE’s lighting position well into the 1930s. Thus, design of the incandescent lamp, and the optimization of the value proposition that users would derive from it, were largely done.
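The user's side of these efficiency gains is simple arithmetic: lumens delivered equal wattage times efficiency, so at the same wattage (and the same electricity bill) a higher-LPW filament yields proportionately more light. A minimal sketch, using the LPW figures from the text and a hypothetical lamp wattage:

```python
# Lumens-per-watt (LPW) arithmetic behind the filament race.
# The 60 W lamp wattage is a hypothetical illustration.

def lumens(watts, lpw):
    """Light output of a lamp drawing `watts` at efficiency `lpw`."""
    return watts * lpw

watts = 60  # hypothetical lamp
print(lumens(watts, 3))   # Edison-era carbon filament:  180 lumens
print(lumens(watts, 10))  # Coolidge's ductile tungsten: 600 lumens
# Same power draw and electricity cost, over 3x the brightness:
# the user's value proposition improves with no change in the bill.
```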

Thus, the company had expanded and completed its electric lighting-and-power value-delivery-system. After initially building that great system, GE had met the AC crisis in the power business, and then the filament-efficiency crisis in lighting, securing its stable, key long-term core business.

GE’s customer-focused product-innovation in medical imaging (1913 and 1970-80)

Like its history in lighting-and-power, GE built and later expanded its customer-focused product innovation in medical imaging, profitably delivering sustained superior value.

Shortly after X-rays were first discovered in 1895, it was found that they pass through a body’s soft tissue, but not hard tissue such as bones and teeth. It was also soon discovered that X-rays blocked by hard tissue would therefore produce an image on a photographic plate or film––similarly to visible light in photography. The medical community immediately recognized X-rays’ revolutionary promise for diagnostic use.

However, X-rays and these images were produced by machines known as X-ray tubes, which before 1913 were erratic and required highly skilled operators. The work of GE’s Coolidge, with help from Langmuir, would transform these devices, allowing them to deliver their potential value proposition––efficiently enabling accurate, reliable diagnosis by radiology technicians and physicians. We saw earlier that Edison’s team did not merely focus narrowly on inventing a light-bulb––the incandescent lamp––but rather designing the entire value-delivery-system for electric lighting-and-power. Similarly, Coolidge and Langmuir did not narrowly focus solely on producing X-rays, but on understanding and redesigning the whole medical-imaging value-delivery-system that used X-rays to capture the images.

The basis for X-ray tubes was the “vacuum tube.” First developed in the early 1900s, these are glass cylinders from which all or most gas has been removed, leaving at least a partial vacuum. The tube typically contains at least two “electrodes,” or contacts––a cathode and an anode. By 1912, potentially useful vacuum tubes had been developed but remained unpredictable and unreliable.

X-rays are produced by generating and accelerating electrons in a special vacuum tube. When the cathode is heated, it emits a stream of electrons that collide with the anode. As the electrons decelerate, most of their energy is converted to heat; only about one percent becomes X-rays. In 1912, Coolidge, who had improved incandescent lamps with ductile tungsten, suggested replacing the platinum electrodes then used in X-ray tubes with tungsten. Tungsten’s high atomic number produced higher-energy X-rays, and its high melting point enabled good performance in the tube’s high-heat conditions.
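That one-percent figure implies a stark energy budget for the tube, which helps explain why a high-melting-point anode mattered so much. A rough sketch (the beam power used is a hypothetical illustration):

```python
# Rough energy budget of an X-ray tube, using the ~1% conversion
# figure from the text. The 1 kW beam power is hypothetical.

XRAY_FRACTION = 0.01  # ~1% of the electrons' energy becomes X-rays

def tube_energy_split(beam_power_w):
    """Split electron-beam power (watts) into (heat, X-ray) output."""
    xrays = beam_power_w * XRAY_FRACTION
    heat = beam_power_w - xrays
    return heat, xrays

heat, xrays = tube_energy_split(1000.0)  # hypothetical 1 kW beam
print(heat, xrays)  # 990.0 10.0 -> nearly all the energy becomes heat
# This heat load is why a tungsten anode, with its very high melting
# point, was key to a stable, controllable tube.
```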

The early X-ray tubes also contained a lot of residual gas, used as the source of electrons to generate X-rays. In 1913, however, Langmuir showed the value of “high-vacuum” tubes––with no residual gas––produced with processes he had used earlier in improving incandescent lamps. He then discovered that a controlled emission of electrons could be achieved by using one of Coolidge’s hot tungsten cathodes in a high vacuum. Coolidge quickly put a heated tungsten cathode in an X-ray tube, with a tungsten anode. This “hot-cathode” Coolidge tube provided the first stable, controllable X-ray generator.

The experiences of users––radiologists––improved dramatically. As Dr. Paul Frame of Oak Ridge Associated Universities writes:

Without a doubt, the single most important event in the progress of radiology was the invention by William Coolidge in 1913 of what came to be known as the Coolidge x-ray tube. The characteristic features of the Coolidge tube are its high vacuum and its use of a heated filament as the source of electrons… The key advantages of the Coolidge tube are its stability and the fact that the intensity and energy of the x-rays can be controlled independently… The high degree of control over the tube output meant that the early radiologists could do with one Coolidge tube what before had required a stable of finicky cold cathode tubes.

GE’s innovation had transformed the value of using X-ray tubes; the same basic design is still used today in medicine and other fields. GE had not invented the device, but by thinking through the most desirable user-experiences and redesigning the X-ray value-delivery-system accordingly, it produced the key innovation.

Expanding its 1913 medical-imaging value-delivery-system nearly seventy years later (1980), GE successfully developed and introduced the first practical, commercial magnetic-resonance-imaging (MRI) machine. Despite important differences between the 1913 and 1980 innovations, they shared much the same fundamental value proposition––enabling more useful medical diagnosis through interior-body images. Without GE’s long previous experience delivering medical-imaging value, its 1980-83 MRI effort––mostly seen at the time as a low-odds bet––would have been unlikely. In fact, it proved a brilliant natural extension of GE’s X-ray-based value propositions. As Paul Bottomley––a leading member of the GE MRI team––wrote in 2019, “Oh, how far it’s come! And oh, how tenuous MRI truly was at the beginning!”

MRI scans use strong magnetic fields, radio waves, and computer analysis to produce three-dimensional images of organs and other soft tissue. These are often more diagnostically useful than X-ray images, especially for conditions such as tumors, strokes, and torn or otherwise abnormal tissue. MRI introduced a new form of medical imaging, providing much higher-resolution images with significantly more detail, while avoiding X-rays’ risk of ionizing radiation. While it entails a tradeoff of discomfort––claustrophobia and noise––its value proposition for many conditions is clearly superior on balance.

However, in 1980 the GE team was betting on unproven, not-yet-welcome technology. GE and other medical-imaging providers were dedicated to X-ray technology, seeing no reason to invest in MRI. Nonetheless, the team believed that MRI could greatly expand and improve the diagnostic experience for many patients and their physicians while producing ample commercial rewards for GE.

In the 1970s, X-ray technology was advanced via the CT scan––computed tomography––which GE licensed and later bought. CT uses combinations of X-ray images, computer-analyzed to assemble virtual three-dimensional image “slices” of a body. In the 1970s GE developed faster CT scanning, producing sharper images. The key medical-imaging advance after 1913, however, would be MRI.

The basis for MRI was the physics phenomenon known as nuclear magnetic resonance (NMR), used since the 1930s in chemistry to study molecular structure. Some atoms’ nuclei display specific magnetic properties in the presence of a strong magnetic field. In 1973, two researchers suggested that NMR could construct interior-body images, especially of soft tissues––the idea of MRI––later winning them the Nobel Prize.

In the mid-1970s, while earning a Ph.D. in Physics, Bottomley became a proponent of MRI. He also worked on a related technology, MRS––magnetic resonance spectroscopy––but he focused on MRI. Then, in 1980 he interviewed for a job at GE, assuming that GE was interested in his MRI work. However, as he recounts:

I was stunned when they told me that they were not interested in MRI. They said that in their analysis MRI would never compete with X-ray CT which was a major business for GE. Specifically, the GE Medical Systems Division in Milwaukee would not give them any money to support an MRI program. So instead, the Schenectady GE group wanted to build a localized MRS machine to study cardiac metabolism in people.

Bottomley showed them his MRS work and got the job, but he began pursuing his MRI goal. Group Technology Manager Red Reddington also hired another MRI enthusiast, William Edelstein, an Aberdeen graduate, who worked with Bottomley to promote an MRI system using a superconducting magnet. The intensity of the magnetic field produced by such a magnet––its field strength––is measured in “T” (tesla units). GE was then thinking of building an MRS machine with a 0.12 T magnet. Instead, Bottomley and Edelstein convinced the group to attempt a whole-body MRI system with the highest-field-strength magnet they could find. The highest-field quote was 2.0 T from Oxford Instruments, which was unsure it could achieve this unprecedented strength. In spring of 1982, unable to reach 2.0 T, Oxford finally delivered a 1.5 T magnet––still the standard for most MRIs today.

By that summer, the 1.5 T MRI had produced a series of stunning images, with great resolution and detail. The group elected to present the 1.5 T product, with its recently produced images, at the major radiological industry conference in early 1983. This decision was backed by Reddington and other senior management, including by-then CEO Jack Welch––despite some risk, illustrated by the reaction at the conference, which Bottomley recounts:

We were totally unprepared for the controversy and attacks on what we’d done. One thorn likely underlying much of the response, was that all of the other manufacturers had committed to much lower field MRI products, operating at 0.35 T or less. We were accused of fabricating results, being corporate stooges and told by some that our work was unscientific. Another sore point was that several luminaries in the field had taken written positions that MRI was not possible or practical at such high fields.

Clearly, however, as the dust settled, the market accepted the GE 1.5 T MRI. As Bottomley continues:

Much material on the cons of high field strength (≥0.3 T) can be found in the literature. You can look it up. What is published–right or wrong–lasts forever. You might ask yourself in these modern times in which 1.5 T, 3 T and even 7 T MRI systems are ordinary: What were they thinking? … All of the published material against high field MRI had one positive benefit. We were able to patent MRI systems above a field strength of 0.7 T because it was clearly not “obvious to those skilled in the art,” as the patent office is apt to say.

In 2005, recognizing Edelstein’s achievements, the American Institute of Physics said that MRI “is arguably the most innovative and important medical imaging advance since the advent of the X-ray at the end of the 19th century.” GE has led the global Medical Imaging market since that breakthrough. As Bottomley says, “Today, much of what is done with MRI…would not exist if the glass-ceiling on field-strength had not been broken.”

*   *   *

Thus, as it had done with its electric lighting-and-power system, GE built and expanded its value-delivery-system for medical imaging. In 1913, the company recognized the need for a more diagnostically valuable X-ray imaging system––both more stable and more efficiently controllable. GE thus innovatively transformed X-ray tubes, enabling much more accurate, reliable, and efficient diagnosis of patients by physicians and technicians.

The GE team’s 1980 bet on MRI had looked like a long shot. However, that effort was essentially an expansion of the earlier X-ray-based value proposition for medical imaging. The equally customer-focused MRI bet likewise paid off well for patients, physicians, and technicians: for many medical conditions, these customers would experience significantly more accurate, reliable, and safe diagnoses. The MRI bet also paid off for GE, making medical imaging one of its largest, most profitable businesses.

Science-based product innovation

We started this Part One post by suggesting that GE applied three strategic principles in its first century––principles it later largely abandoned––beginning with the above-discussed customer-focused product-innovation. Now we turn to the second of these three principles––science-based product-innovation.

In developing product-innovations that each helped deliver some superior value proposition, GE repeatedly used scientific knowledge and analysis. This science-based perspective consistently increased the probability of finding solutions that were truly different and superior for users. Most important GE innovations were either non-obvious or not possible without this scientific perspective. Thus, to help realize genuinely different, more valuable user-experiences, and to make delivery of those experiences profitable and proprietary (e.g., patent-protected), GE businesses made central use of science in their approach to product innovation.

This application included physics and chemistry, along with engineering knowledge and skills, supported by the broad use of mathematics and data. It was frequently led by scientists, empowered by GE to play a leading, decision-making role in product innovation. After 1900, building on Edison’s earlier product-development lab, GE led all industry in establishing a formal scientific laboratory. It was intended to advance the company’s knowledge and understanding of the relevant science, but primarily focused on creating practical, new, or significantly improved products.

Throughout GE’s first century, its science-based approach to product innovation was central to its success. After 1980, of course, GE did not abandon science. However, it became less focused on fundamental product-innovation, more seduced by the shorter-term benefits of marginal cost and quality enhancement, and thus retreated from its century-long commitment to science-based product-innovation. Following are some examples of GE’s success using that science-based approach most of its first century.

*   *   *

Edison was not himself a scientist but knew that his system must be science-based, with technical staff. As Ernest Freeberg writes:

What made him ultimately successful was that he was not a lone inventor, a lone genius, but rather the assembler of the first research and development team, at Menlo Park, N.J.

Edison built a technically skilled staff. Most important among them was Francis Upton, a Princeton- and Berlin-trained mathematician-physicist. Edison made great use of scientific data and expertise: Upton had studied under physicist Hermann von Helmholtz, and now did research and experimentation on lamps, generators, and wiring systems. He was invaluable to Edison by, as Hughes describes, “bringing scientific knowledge and methods to bear on Edison’s design ideas.”

Another lab assistant later described Upton as the thinker and “conceptualizer of systems”:

It was Upton who coached Edison about science and its uses in solving technical problems; it was Upton who taught Edison to comprehend the ohm, volt, and weber (ampere) and their relation to one another. Upton, for instance, laid down the project’s commercial and economic distribution system and solved the equations that rationalized it. His tables, his use of scientific units and instruments, made possible the design of the system.

As discussed earlier, the key insight of the lighting-and-power project was the importance of using high-resistance filaments in the incandescent lamps. Edison only reached this insight with Upton’s crucial help. Upton had helped Edison understand Joule’s and Ohm’s fundamental laws of electricity, which relate power, current, voltage, and resistance. They needed to reduce the energy lost in the conductors; from Joule’s law, they saw that they could reduce that loss if, instead of using more copper, they reduced the current. Bringing in Ohm’s law––current equals voltage divided by resistance––meant that increasing the lamp’s resistance, at a given voltage, would proportionately reduce current. As historian David Billington explains[2]:

Edison’s great insight was to see that if he could instead raise the resistance of the lamp filament, he could reduce the amount of current in the transmission line needed to deliver a given amount of power. The copper required in the line could then be much lower.

And as Israel writes:

Edison realized that by using high-resistance lamps he could increase the voltage proportionately to the current and thus reduce the size and cost of the conductors.

These relationships of current to resistance are elementary to an electrical engineer today, but without Upton and Edison thoughtfully applying these laws of electricity in the late 1870s, it is doubtful they would have arrived at the key conclusion about resistance. Without that insight, the electric lighting-and-power project would have failed. Numerous factors contributed to the success of that original system, and to that of the early GE. Most crucial, however, was Edison and Upton’s science-based approach.
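The reasoning Upton brought to bear can be written out as a short derivation in modern notation (a sketch, not Edison's own formulation):

```latex
\begin{align*}
\text{Joule's law (loss in the line conductors):}\quad
  & P_{\text{loss}} = I^{2} R_{\text{line}} \\
\text{Ohm's law (current drawn by the lamp at voltage } V\text{):}\quad
  & I = \frac{V}{R_{\text{lamp}}} \\
\text{Substituting:}\quad
  & P_{\text{loss}} = \frac{V^{2}\, R_{\text{line}}}{R_{\text{lamp}}^{2}}
\end{align*}
```

At a given supply voltage, raising the lamp's resistance cuts the current proportionately and the line loss with the square of that cut, so the conductors could be thin, and the copper investment commercially tolerable, without adding copper to the mains.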

That approach was also important in other innovations discussed above, such as:

  • Steinmetz’ law of hysteresis and other mathematical and engineering insights into working with AC and resolving its obstacles
  • Langmuir’s scientific exploration and understanding of how vacuum tubes work, leading to the high-vacuum tube and contributing to the Coolidge X-ray tube
  • Coolidge’s deep knowledge of tungsten and his skill in applying that understanding first to lamp filaments and then to the design of his X-ray tube
  • And as Bottomley writes, the MRI team’s “enormous physics and engineering effort that it took to transform NMR’s 1960’s technology–which was designed for 5-15 mm test tube chemistry–to whole body medical MRI scanners”

Another example (as discussed in the section on “synergistic product-innovation,” later in this Part One), is the initial development of the steam turbine for electric power generation, and then its later reapplication to aviation. That first innovation––for electric power––obviously required scientific and engineering knowledge. The later reapplication of that turbine technology to aviation was arguably an even more creative, complex innovation. Scientific understanding of combustion, air-compression, thrust, and other principles had to be combined and integrated.

Finally, the power of GE’s science-based approach to product innovation was well illustrated in its plastics business. From about 1890 to the 1940s, GE needed plastics as electrical insulation––non-conducting material––to protect its electrical products from damage by electrical current. For the most part during these years, the company did not treat plastics as an important business opportunity, or an area for major innovation.

However, by the 1920s the temperature limitation of insulation was becoming increasingly problematic. Many plastics are non-conductive because they have high electrical resistance, which decreases at higher temperatures. As power installations expanded and electric motors did more work, they produced more heat; beyond about 265˚ Fahrenheit (a limit later raised to about 310˚ F), insulation materials would fail. Equipment makers and customers therefore had to use motors and generators made with much more iron and copper––thus at higher cost––to dissipate the heat.

Then, in 1938-40, brilliant GE chemist Eugene Rochow created a clearly superior, GE-owned solution to this high-temperature insulation problem––a polymer raising the limit to above 356˚ F. It also proved useful in many other applications––sealants, adhesives, lubricants, personal-care products, biomedical implants, and others, and thus became a major business for GE.

Rochow’s success was the first major application of a rigorously science-based approach to product-innovation in GE plastics. This breakthrough traced to insights resulting from his deep knowledge of the relevant chemistry and his application of this understanding to find non-obvious solutions. This success gave GE confidence to take plastics-innovation more seriously, going well beyond insulation and electrical-products support. The science-based approach would be used again to create at least three more new plastics.

In 1938, Corning Glass Works announced a high-temperature electrical insulator called silicone––a polymer pioneered years earlier by English chemist Frederick Kipping. Corning hoped to sell glass fibers for use in electrical insulation. It asked GE––a previous collaborator––to help find a practical manufacturing method for the new polymer. Based on this history, Corning would later complain that Rochow and GE had stolen its innovation.

In reality, Rochow thought deeply about the new polymer and, with his science-based approach, realized that it had very high-temperature insulation potential. However, he also concluded that Corning’s specific design would lose its insulating strength under the extended exposure to very high temperatures it would very likely face. Rochow then devised a quite different silicone design that avoided this crucial flaw.

To illustrate Rochow’s science-based approach, following is a partial glimpse of his thought process. He was an inorganic chemist (unlike the mostly organic chemists at GE and Corning). A silicone polymer’s “backbone” chain is inorganic––alternating silicon and oxygen atoms, with no carbon. Desirably, this structure helps make silicones good insulators, even at high temperatures. However, for product-design flexibility, most silicones are also partly organic––they have organic “side groups” of carbon atoms attached to silicon atoms. The Kipping/Corning side groups were ethyl and phenyl. As Rochow recounted in his 1995 oral history:

I thought about it. “Ethyl groups—what happens when you heat ethyl phenyl silicone up to thermal decomposition [i.e., expose it to very high temp]? Well, it carbonizes. You have carbon-carbon bonds in there… You have black carbonaceous residue, which is conducting. [i.e., the silicone will lose its insulating property]. How can we avoid that? Well, by avoiding all the carbon-carbon bonds. What, make an organosilicon derivative [e.g., silicone] without any carbon-carbon bonds? Sure, you can do it. Use methyl. Kipping never made methyl silicone. Nobody else had made it at that time, either. I said, in my stubborn way, ‘I’m going to make some methyl silicone.’”

Rochow did succeed in making methyl silicone and demonstrated that it delivered a clearly superior value proposition for high-temperature electrical insulation. In contrast to the Corning product, the GE silicone maintained its insulating performance even after extended high-temperature exposure. Corning filed patent interferences, but lost on all five GE patents. Silicones took ten years to become profitable, but they grew into a major category, making GE a leading plastics maker. The same fundamentally science-based approach applied by Rochow would pay off again repeatedly with new plastics innovations in the rest of GE’s first century.

In 1953 chemist Daniel Fox, working to improve GE’s electrical-insulating wire enamel, discovered polycarbonates (PC). This new plastic showed excellent properties[3]––more transparent than glass, yet very tough––thus, it looked like acrylic but was much more durable. It was also a good insulator, heat resistant, and able to maintain properties at high temperatures––and became a very large-volume plastic.[4]

This discovery is often described as accidental. Luck was involved, as Fox had not set out to invent a material like PC. However, his science-based instinct led to the discovery. Polycarbonates belonged to the “carbonates” category, long known to hydrolyze easily––to break down by reaction with water; researchers had therefore given up on carbonates as unusable. Fox, however, remembered that as a graduate student he had once encountered a particular carbonate that, as part of an experiment, he needed to hydrolyze––yet, hard as he tried, he could not induce it to hydrolyze. Following that hint, he explored and found, surprisingly, that polycarbonates indeed resist hydrolysis; thus, they could be viable as a new plastic.

Asked in 1986 why no one else discovered PC before him, the first reason he gave was that “…everyone knew that carbonates were easily hydrolyzed and therefore they wouldn’t be the basis for a decent polymer.” Fox described himself as an “opportunist.” Not necessarily following rote convention, he was open to non-obvious routes, even ones that “everyone knew” would not work. Fox recounted how one of his favorite professors, “Speed” Marvel at Illinois, “put forth a philosophy that I remember”:

For example, he [Marvel] told the story of a couple of graduate students having lunch in their little office. One of them ate an apple and threw the apple core out the window. The apple core started down and then turned back up[5]. So, he made a note in his notebook, “Threw apple core out the window, apple core went up,” and went back to reading his magazine. The other man threw his banana peel out the window. It started down and then it went up. But he almost fell out of the window trying to see what was going on. Marvel said, “Both of those guys will probably get Ph.D.’s, but the one will be a technician all of his life.”

I remember that. First you must have all the details, then examine the facts. He that doesn’t pursue the facts will be the technician, Ph.D. or not.

Fox’s insightful, if lucky, discovery proved to be a watershed for GE. Following the major silicones and PC successes, GE was finally convinced that it could––and should aggressively try to––succeed in developing major new plastics, even competing against the chemical giants. Applying the same science-based approach, GE produced the first synthetic diamonds and, later, two additional major new products––Noryl and Ultem. These two delivered important improvements in heat resistance, among other valuable experiences, at lower costs than some other high-performance plastics.

GE thus helped create today’s plastics era, which delivers low costs and functional or aesthetic benefits. On the downside, as the world has increasingly learned, these benefits are often mixed with an unsettling range of environmental/safety threats. The company was unaware of many of these threats early in its plastics history. Still, it regrettably failed to aggressively apply its science-based capabilities to uncover––and address––some of these threats earlier, at least after 1980. Nonetheless, during GE’s first century, its science-based approach was instrumental in its great record of product innovation.

Of course, most businesses today use science to some degree and claim to be committed to product (or service) innovation. However, many of these efforts are in fact primarily focused on marginal improvements––in costs, and in defects or other undesired variations––but not on fundamental improvements in performance of great value to customers. After its first century, GE––as discussed later in Parts Two-Four of this series––would follow that popular if misguided pattern, reducing its emphasis on science-based breakthrough innovation, in favor of easier, but lower-impact, marginal increases in cost and quality. Prior to that shift, however, science-based product-innovation was a second key aspect––beyond its customer-focused approach––of GE’s first-century product innovation.

Synergistic product innovation

Thus, the two key strategic principles discussed above were central to GE’s product-innovation approach in that first century––it was customer-focused and science-based. An important third principle was that, where possible, GE identified or built, and then strengthened, synergies among its businesses––shared strengths that facilitated product-innovation of value to customers.

In GE’s first century, all its major businesses––except plastics––were related to, and synergistically complementary with, one or more other electrically-rooted GE businesses. Thus, though diverse, its (non-plastics) businesses and innovations in that era were connected via electricity or other aspects of electromagnetism, sharing fundamental customer-related and science-based knowledge and understanding. This shared background and perspective enabled beneficial, synergistically shared capabilities, including sharing aspects of the businesses’ broadly defined value propositions and some of the technology for delivering important experiences in those propositions.

The first and perhaps most obvious example of this electrical synergy was the Edison electric lighting-and-power system. GE’s provision of electrical power was the crucial enabler of its lighting, which in turn provided the first purpose and rationale for the power. Later, expanding this initial synergistic model, GE would introduce many other household appliances, again both promoting and leveraging the electric-power business. These included the first or early electric fan, toaster, oven, refrigerator, dishwasher, air conditioner, and others.

Many of these GE products were fast-followers of other inventors eager to apply electricity, rather than major breakthrough innovations. An exception was Calrod––an electrically-insulating but heat-conducting ceramic––which made cooking safer. Without a new burst of innovation, GE appliances eventually became increasingly commoditized. Perhaps that decay was more a failure of imagination than an inevitability in the face of Asian competition, as many would later claim. In any case, appliances nonetheless remained a large business for all of GE’s first century, thanks to its original synergies.

Another example of successful innovation via electrical or electromagnetic synergy was GE’s work in radio. The early radio transmitters of the 1890s generated radio-wave pulses, able to convey only Morse-code dots-and-dashes––not audio. In 1906, recognizing the need for full-audio radio, a GE team developed what it termed a “continuous wave” transmitter. Physicists had suggested that an electric alternator––a rotating generator of alternating current––if run at a high enough frequency, would generate electromagnetic radiation. It thus could serve as a continuous-wave (CW) radio transmitter.

GE’s alternator-transmitter seemed to be such a winner that in 1919 the US government encouraged the company to form the Radio Corporation of America––to ensure US ownership of this important technology. RCA started developing a huge complex for international radio communications based on the giant alternators. However, that technology was abruptly supplanted in the 1920s by short-wave transmission, whose lower cost was enabled by fast-developing vacuum-tube technology. Developing other radio-related products, RCA still became a large, innovative company, but anti-trust litigation forced GE to divest its ownership in 1932.

Nonetheless, the long-term importance of GE’s CW innovation was major. This breakthrough––first via alternators, then vacuum-tube and later solid-state technology––became the long-term winning principle for radio transmission. Moreover, GE’s related vacuum-tube contributions were major, including the high-vacuum tube. And, growing out of this radio-technology development, GE later made some contributions to TV development.

However, beyond these relatively early successes, a key example of great GE synergy in that first century is the interaction of power and aviation. Major advances in flight propulsion––creating GE’s huge aviation business––were enabled by its earlier innovations, especially turbine technology from electric power but also tungsten metallurgy from both lighting and X-ray tubes. Moreover, this key product-innovation synergy even flowed in both directions, as later aviation-propulsion innovations were reapplied to electric power.

Shortly after steam turbines replaced steam engines for power generation, gas turbines emerged. They efficiently converted hot gas into great rotational power, at first spinning an electrical generator. During the First World War, GE engineer Sanford Moss invented a radial gas compressor, using centrifugal force to “squeeze” the air before it entered the turbine. Since an aircraft engine lost half its power climbing into the thin air of high altitudes, the US Army, hearing of Moss’s “turbosupercharger” compressor, asked for GE’s help. Moss’s invention proved able to sustain an engine’s full power even at 14,000 feet of altitude. GE had entered the aviation business, producing 300,000 turbosuperchargers in WWII. This success was enabled by GE’s knowledge and skills from previous innovations. Austin Weber, in a 2017 article, quotes former GE executive Robert Garvin:

“Moss was able to draw on GE’s experience in the design of high rotating-speed steam turbines, and the metallurgy of ductile tungsten and tungsten alloys used for its light bulb filaments and X-ray targets, to deal with the stresses in the turbine…of a turbo supercharger.”

GE’s delivery of a winning value proposition to aviation users––focused on increased flight-propulsion power, provided by continued turbine-technology improvements––would continue and expand into the jet-engine era. The primary focus was military aviation until the 1970s, when GE entered the commercial market. In 1941, Weber continues, the US and Britain selected GE to improve the Allies’ first jet engine––designed by British engineer Frank Whittle. GE was selected because of its “knowledge of the high-temperature metals needed to withstand the heat inside the engine, and its expertise in building superchargers for high-altitude bombers, and turbines for power plants and battleships.”

In 1942, the first US jet was completed, powered by GE’s I-A engine, supplanted in 1943 by the first US production jet engine, the J31. Joseph Sorota, a member of GE’s original team, recounted the experience, explaining that the Whittle engine “was very inefficient. Our engineers developed what now is known as the axial flow compressor.” This compressor is still used in practically every modern jet engine and gas turbine. In 1948, as GE historian Tomas Kellner writes, GE’s Gerhard Neumann improved the jet engine further with the “variable stator”:

It allowed pilots to change the pressure inside the turbine and make planes routinely fly faster than the speed of sound.

The J47 played a key role as the engine for US fighter jets in the Korean War and remained important for military aircraft for another ten years. It became the highest-volume jet engine, with GE producing 35,000 units. The variable-stator innovation led to the J79 military jet engine in the early 1950s, followed by many other advances in this market.

In the 1970s, the company successfully entered the commercial aviation market, and later became a leader there as well. Many improvements followed, with GE aviation by 1980 reaching about $2.5 billion in revenue (about 10% of total GE).

GE’s application to aviation of turbine and other technology first developed for the electric-power business brilliantly demonstrated the company’s ability to capture powerful synergies based on shared customer-understanding and technology. Then, in the 1950s, as Kellner continues:

The improved performance made the aviation engineers realize that their variable vanes and other design innovations could also make power plants more efficient.

Thus, GE reapplied turbine and other aviation innovations––what it now terms aeroderivatives––to strengthen its power business. The company had completed a remarkable virtuous cycle. Like symbiosis in biology, these synergies between two seemingly unrelated businesses were mutually beneficial.

*   *   *

Therefore, in its first century GE clearly had great success proactively developing and using synergies tied to its electrical and electromagnetic roots. As mentioned at the outset of this Part One post, this synergistic approach was the third strategic principle––after customer-focused and science-based––that GE applied in producing its product-innovations during that era. However, while the dominant characteristic of GE’s first century was its long line of often-synergistic product-innovations, that does not mean that GE maximized all its opportunities to synergistically leverage those key roots.

Perhaps an early, partial example was radio, where GE innovated importantly but seemed to give up rather easily on this technology after being forced to divest RCA. GE might well have tried to recapture the lead in this important electrical area.

Then, after 1950, GE again folded too easily in the emerging semiconductor and computer markets. Natural fields for this electrical-technology leader, these markets would be gateways to the vastly significant digital revolution. In the 1970s, or later under Welch, GE still could have gotten serious about re-entering them. It is hard to believe that GE could not have become a leading player in these markets if it had really tried. Moreover, beyond missing some opportunities for additional electrically-related synergies, GE also drifted away from strategic dependence on such synergies by developing plastics. Profitable and rapidly growing, plastics became GE’s sole business not synergistically complementary with its traditional, electrically-related ones.

These probably missed opportunities, and modest strategic drifting, may have foreshadowed some of GE’s future, larger strategic missteps that we will begin to explore in Part Two of this series. Thus, we’ve seen that GE displayed some flaws in its otherwise impressive synergistic approach. Overall, nonetheless, the company’s product-innovation approach and achievements in its first century were largely unmatched. Although later essentially abandoned, this value-delivery-driven model should have been sustained by GE and still merits study and fresh reapplication in today’s world.

In any case, however, by the 1970s GE needed to rethink its strategy. What would be the basis for synergistically connecting the businesses, going forward? How should the company restructure to ensure that all its non-plastics businesses had a realistic basis for mutually shared synergies, allowing it to profitably deliver superior value?

In answering these questions, GE should have redoubled its commitment to customer-focused, science-based product-innovation, supported by synergies shared among businesses. Using such a strategic approach, although GE might have missed some of the short-term highs of the Welch era, it could have played a sustained role of long-term leadership, including in its core energy markets. However, as GE’s first century wound down, this astounding company––that had led the world in creative, value-delivering product-innovation––seemed to lack the intellectual energy to face such questions again.

Instead, as we will see, the company seemed to largely abandon the central importance of product-innovation, growing more enamored of service––profitably provided on increasingly commoditized products. GE would pursue a series of strategic illusions––believing that it controlled its own destiny, as Jack Welch urged––but would ultimately fail as the company allowed the powerful product-innovation skills of its first century to atrophy. So, we turn next to the Welch strategy––seemingly a great triumph but just the first and pivotal step in that tragic atrophy and decline. Thus, our next post of this GE series will be Part Two, covering 1981-2010––The Unsustainable Triumph of the Welch Strategy.


Footnotes––GE Series Part One:

[1] By this time, some street lighting was provided by electric-arc technology, far too bright for indoor use

[2] David P. Billington and David P. Billington Jr., Power, Speed, and Form: Engineers and the Making of the Twentieth Century (Princeton: Princeton University Press, 2006) 22

[3] Unfortunately, PC is made with bisphenol A (BPA), which later became controversial due to health concerns

[4] Bayer discovered PC at about the same time; the two companies shared the rewards via cross-licensing

[5] Presumably meaning it stopped and flew back up, above the window



GE’s Illusions of Destiny-Controlled…& the World’s Real Losses

Value Delivery in the Rise-and-Decline of General Electric

Introductory Overview to Our Four-Part Series

By Michael J. Lanning––Copyright © 2021 All Rights Reserved

As the world recovers from Covid-19, business leaders more than ever need strategies that profitably deliver superior value to customers. Few companies illustrate better than General Electric both the power of applying––and the downside of neglecting––these principles of value delivery.

Jack Welch, GE’s legendary CEO from 1981 to 2001, famously urged businesses to “control your own destiny.” By 2001, GE had become the world’s most valued and admired company, apparently fulfilling Welch’s vision of destiny-controlled. In retrospect, however, the company’s greater and more lasting achievements came in its prior, partially forgotten first century––1878-1980. Using principles of value-delivery strategy, GE built a corporate giant on a base of great product-innovation.

Then, however, despite its much-celebrated apparent triumph, the Welch strategy and its continuation under successor Jeff Immelt largely abandoned those strategic principles. Instead, the company chased self-deceiving illusions, ultimately losing control of its destiny and declining precipitously––though it has now stabilized. The four-part Value Delivery Blog series introduced here aims to identify the still-relevant key principles of GE’s historic strategy that, in its first century, made the company so important––but whose later neglect and decay eventually led to its major decline and missed opportunities.

Note from the author: special thanks to colleagues Helmut Meixner and Jim Tyrone for tremendously valuable help on this series.

Part One: Customer-Focused, Science-Based, and Synergistic Product-Innovation (1878-1980)

GE built a corporate giant in its first century via major product-innovations that profitably delivered real value to the world. These innovations followed three principles of value-delivery strategy, discussed in Part One––they were customer-focused, science-based, and where possible, powerfully synergistic among GE businesses.


Following the first, most fundamental of these three principles, GE innovations in that first century were customer-focused––integrated around profitably delivering superior value[1] to customers. Thus, GE businesses delivered winning value propositions––superior combinations of customer-experiences, including costs. We term a business managed in this way a “value-delivery-system,” apt description for most GE businesses of that era.

Part One discusses two key examples of these GE value-delivery-systems successfully created and expanded in that century after 1878. First is the company’s electric lighting-and-power value-delivery-system. It included incandescent-lamps––light bulbs––and the supply of electricity to operate the lighting. Edison and team studied indoor-lighting users’ experiences––with gas-lighting––and designed ways to improve those experiences.

Thus, users were offered a superior value proposition––proposing that they replace their gas-lighting system with Edison’s electric one. As discussed in Part One, users in return got a superior set of experiences: greater safety, more comfortable breathing––the air no longer “fouled” by gas––higher-quality light, shorter-but-adequate lamp life, and equally convenient operation, all at lower total cost than gas lighting. Profitably delivering this value proposition was enabled by key elements of lamp design, more efficient electricity generation-and-distribution, and other important inventions.

This brilliant, integrated system was GE’s founding and most important innovation. Aside from the lamps, consider just electric power. Discussing the world’s infrequent, truly breakthrough advances in technology, historian of science Vaclav Smil writes:

And there has been no more fundamental, epoch-making modern innovation than the large-scale commercial generation, transmission, distribution, and conversion of electricity.… the electric system remains Edison’s grandest achievement: an affordable and reliably available supply of electricity has opened doors to everything electrical, to all great second-order innovations ranging from gradually more efficient lighting to fast trains, from medical diagnostic devices to refrigerators, from massive electrochemical industries to tiny computers governed by microchips.

Then, in the first fifteen years of the twentieth century, the company expanded this electric lighting-and-power value-delivery-system. Its new power-innovations made electricity more widely available by helping enable long-distance transmission. Its new lighting innovations produced major advances in lamp efficiency.

A second key GE value-delivery-system discussed in Part One is medical imaging. Starting with advances to the X-ray tube, GE much later developed the first practical magnetic-resonance-imaging (MRI) machine. These innovations delivered a superior value proposition––first making diagnoses more accurate and reliable, and later, via MRI, safer.

In the great product-innovations of its first century, GE’s first strategic principle was to be customer-focused. As a result, it repeatedly created winning value-delivery-systems––integrating activity around profitable delivery of superior value propositions.

Its second strategic principle was taking a science-based approach to its product-innovation efforts. In that era, GE’s major product-innovations were typically led by highly knowledgeable physicists, chemists, or other scientists, making extensive use of mathematics and data. The company led industry in establishing a formal laboratory devoted to scientific research in the context of developing practical, valuable products. This science-based approach increased the likelihood that GE’s product-innovations would result in truly different, meaningfully superior customer experiences. It also helped make delivery of these experiences profitable and proprietary (e.g., patent-protected) for the company.

Rigorous application of scientific theory and data enabled many GE innovations. Edison’s lighting-and-power project developed its key insight––concerning electrical resistance––by using the early science of electricity. Other examples of the importance of GE’s science-based approach included the work enabling alternating current (AC), improvements to the X-ray tube, use of tungsten-knowledge to improve lamps, and later the extensive physics and engineering work to make MRI a practical reality. Notably, it also transformed GE Plastics––from providing secondary support for the electrical businesses, to a major business––which we also discuss in Part One.

GE also applied a third key principle––where possible, it identified or built, then reinforced, powerful synergies among its businesses. Based on reapplication of technologies and customer-insights from one GE business to another, these were shared strengths, enabling new GE innovations, further profitably delivering superior value.

In its first century, GE had great success developing synergies related to its electrical and electromagnetic roots. All its major businesses––except plastics––shared these electrical roots. They thus synergistically shared valuable knowledge and insights that were customer-related and science-based. The company’s founding innovation––the Edison lighting-and-power value-delivery-system––was inherently synergistic. Power enabled lighting; lighting justified and promoted power. GE later emulated this synergy by developing its household-appliance business. As discussed in Part One, perhaps the company’s greatest example of synergies between seemingly unrelated businesses was the sharing of turbine technology, first developed for power, then reapplied to aviation.

GE would also fail to realize some synergistic opportunities tied to its electrical roots. Most surprisingly, the company gave up easily on the crucially important markets for semiconductors and computers, while embracing plastics, its first major non-electrical business. This subtle drifting away from its electrical roots foreshadowed GE’s later, ultimately problematic focus on non-electrical businesses––especially the financial kind. Still, GE’s first century was a tour de force of value-delivery-driven product innovation. After 1981, Welch and team should have better understood and reinvigorated those past strengths.

Part Two: Unsustainable Triumph of the Welch Strategy (1981-2010)

For over 20 years after 1981, GE triumphed with Jack Welch’s radical new strategy that included a significant reliance on financial services, a strategy continued by Welch successor Jeff Immelt until about 2010. As discussed in Part Two, this complex strategy ingeniously melded some strengths of GE’s huge and reliably profitable traditional businesses with its aggressive, new financial ones. It enabled the latter to benefit uniquely from GE’s low corporate cost-of-borrowing––due to its huge, profitable, and exceptionally credit-worthy traditional businesses.

This hybrid strategy was largely unavailable to financial competitors, since they lacked GE’s huge traditional product businesses. With this crucial cost advantage, the new GE financial businesses could profitably deliver a clearly superior value proposition to financial customers. These financial businesses therefore grew dramatically and profitably in the Welch era, leading GE’s exceptional overall profitable growth and making GE the world’s most valued and admired corporation by 2001. Yet there were longer-term flaws hidden in this triumphant strategy.

*   *   *

Through our value-delivery lens we see many businesses’ strategies as what we term “internally-driven.” Managers of an internally-driven business think inside-out, deciding what products and services to offer based on what they believe their business does best, or best leverages its assets and “competencies.” They are driven by their own internal agenda, more than what customers would potentially value most. Even though GE’s financial businesses delivered a profitable, superior value proposition for two decades, the overall Welch strategy increasingly became internally-driven.

Instead of insisting on new product-innovation in the traditional industrial businesses, GE increasingly used the old core businesses to help enable the rapid, highly profitable growth of the financial businesses. Moreover, the hybrid strategy’s advantages and resulting growth were inherently unsustainable much past 2000. Its heady successes likely led Welch’s and later Immelt’s teams to embrace the illusion that GE could master financial or any other business, no matter how new and complex. However, the high returns in GE Capital (GEC, its finance business) reflected high risk, not just financial innovation––higher risk than GE fully understood. These risks very nearly wrecked the company in 2008-09.

In addition, as will be discussed in Part Two, the unique cost advantage of the GE financial businesses (GEC) could not indefinitely drive much faster growth for GEC than for the non-financial businesses. Given the accounting and regulatory rules of the time, GEC’s cost advantage was only available if GEC’s revenues remained less than fifty percent of total GE revenues. That threshold was reached in 2000. After that point, GE needed either to reduce GEC’s growth rate or to dramatically increase the growth rate of the traditional businesses. Welch was likely aware of this problem in 2000 when he attempted the gigantic acquisition of Honeywell, but European regulators rejected the move on antitrust grounds. The limitations of the strategy, and the pressures on it, were clearly emerging by 2000.
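The arithmetic behind this fifty-percent cap is simple: if GEC had to stay below half of total revenues, it could never exceed the industrial businesses’ revenues, so GEC’s long-run growth was ultimately capped by industrial growth. A minimal sketch with purely illustrative numbers (hypothetical, not GE’s actual financials):

```python
# Illustrative only: hypothetical revenue figures, not GE's actual financials.
# If GEC must stay below 50% of total revenue, GEC revenue can never exceed
# industrial revenue -- so a fast-growing GEC must eventually hit the cap.

def years_until_cap(gec, industrial, gec_growth, ind_growth):
    """Count years until GEC revenue reaches industrial revenue
    (i.e., GEC hits 50% of the combined total)."""
    years = 0
    while gec < industrial:
        gec *= 1 + gec_growth
        industrial *= 1 + ind_growth
        years += 1
    return years

# Hypothetical: GEC at $30B growing 15%/yr vs. industrial at $70B growing 4%/yr.
print(years_until_cap(30.0, 70.0, 0.15, 0.04))  # prints 9
```

At these assumed growth rates the cap binds in under a decade, which is the structural pressure the paragraph describes.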

Thus, the Welch and then Immelt teams needed to replace the GEC-centered strategy, creating a new surge of product-innovation in the non-financial businesses in order to grow the aging company. Complicating this challenge, however, GE’s internally-driven instincts focused increasingly on financial engineering, efficiency, and optimization. Its historically crucial capabilities for customer-focused, science-based product-innovation, with shared synergies, all atrophied. As Gerard Tellis of the USC Marshall School of Business writes:

Ironically, GE rose by exploiting radical innovations: incandescent light bulbs, electric locomotives, portable X-ray machines, electric home appliances, televisions, jet engines, laser technologies, and imaging devices. Sadly, in the last few decades, that innovation-driven growth strategy gave way to cash and efficiency-focused acquisitions.

R&D spending continued, but the historical science-based focus on product-innovation breakthroughs gave way to a focus on cost, quality and information technology (IT). Moreover, a new synergy was achieved between the traditional and financial businesses, but the historic emphasis on synergy in product-innovation declined.

Welch’s high-octane strategy was flawed not only by being internally-driven; it was also unsound in being what we term “customer-compelled.” Again, through our value-delivery lens, we see many businesses trying to avoid the obvious risk of internally-driven thinking––the risk of neglecting or even ignoring what customers want. Therefore, they often follow a well-meaning but misguided customer-compelled strategic map, just as GE did during and after Welch. They strive to “be close and listen” to customers, promising “total satisfaction,” but they may fail to discover which experiences customers would most value––often because customers themselves do not know. That logical error was a limitation in one of Welch’s key, most widely influential initiatives––the Six Sigma model.

This demanding, data-intensive technique can be very powerful in reducing variance in product performance––highly useful for manufacturing products (or services) that much more precisely meet customers’ requested performance. However, it is only effective if the business correctly understands which dimensions of that performance customers value most. Otherwise, it yields a customer-compelled result, not a very valuable one for the customer. Moreover, Six Sigma can also be used in an internally-driven way: a business may select a dimension of performance for which it knows how to reduce variance, but which may not be a dimension of highest value to the customer. The technique will then again deliver less variance, but not more value.
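For context, the “six sigma” name itself is defect arithmetic: a process at six-sigma quality produces about 3.4 defects per million opportunities (DPMO), using the convention of a 1.5-sigma long-term shift. A minimal sketch of that standard conversion (generic Six Sigma arithmetic, not GE-specific code):

```python
from statistics import NormalDist

def sigma_level(dpmo: float) -> float:
    """Convert defects-per-million-opportunities to a Six Sigma level,
    applying the conventional 1.5-sigma long-term shift."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# The canonical Six Sigma target of 3.4 DPMO corresponds to ~6.0 sigma.
print(round(sigma_level(3.4), 1))  # prints 6.0
```

Note that the formula says nothing about *which* performance dimension to measure––exactly the gap the paragraph above identifies.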

Welch was sometimes hailed not only for his––ultimately too clever––strategic hybrid of the industrial and financial businesses. Until the recent years of GE’s decline, he was also lionized for his major emphasis on efficiency––reducing cost and increasing marginal quality, including through Six Sigma. These initiatives achieved meaningful cost reduction, which is part of delivering value to customers. But by the 2010s (after the financial crisis) many business thinkers had changed their tone. In 2012, for example, Ron Ashkenas wrote in HBR:

Six Sigma, Kaizen, and other variations on continuous improvement can be hazardous to your organization’s health. While it may be heresy to say this, recent evidence from Japan and elsewhere suggests that it’s time to question these methods… Looking beyond Japan, iconic six sigma companies in the United States, such as Motorola and GE, have struggled in recent years to be innovation leaders. 3M, which invested heavily in continuous improvement, had to loosen its sigma methodology in order to increase the flow of innovation… As innovation thinker Vijay Govindarajan says, “The more you hardwire a company on total quality management, [the more] it is going to hurt breakthrough innovation. The mindset that is needed, the capabilities that are needed, the metrics that are needed, the whole culture that is needed for discontinuous innovation, are fundamentally different.”

In 2001 Jeff Immelt took over as GE CEO. During 2001-2007, Immelt’s GE increased R&D and produced incremental improvements in product cost, quality, and performance, along with advances in IT. However, GE needed to return, more fundamentally, to its historical strengths of customer-focused, science-based product-innovation. The company needed to fundamentally rethink its markets and its value-delivery-systems––deeply studying the evolution of customer priorities and the potential of technology and other capabilities to meet those priorities.

They might have discovered major new value propositions and implemented new, winning value-delivery-systems, perhaps driven again by product-innovation as in GE’s first century. However, Immelt and his team seemed to lack the mindset and skills for strategic exploration and reinvention. They sensed a need to replace the Welch strategy but depended on GEC for profits. Not surprisingly, they proved too timid to act until too late, resulting in near-fatal losses in the 2008 financial crisis. This flirtation with disaster was followed by a decade of seemingly safe strategies, lacking value-delivery imagination.

Part Three: Digital Delusions (2011-2018)

Thus, the Welch finance-focused strategy that seemed so triumphant in 2001 proved tragically flawed longer-term. It had led the company largely to abandon its historical commitment to value-delivery-based product-innovation in the traditional, non-financial businesses. In addition, the shiny new finance-focused strategy was riskier than the company understood until it was too late. Finally, the strategy’s apparent triumph must have encouraged the Welch and Immelt teams to believe they could succeed in any business. That belief was nurtured by some successes without electrical roots––especially GE Plastics, and perhaps NBC––and further by the management-training program at Crotonville. The myth of GE invincibility in any business would be dismantled, first by the financial crisis and then further in the years after 2011.

Once the GEC-centered strategy imploded, Immelt and GE hoped to replace it with a grand, industrial-related strategy that could drive nearly as much growth as the Welch strategy had. However, such a strategy needed to identify and deliver value propositions centrally important to industrial customers’ long-term success, including by product innovation. GE did focus on a technology for marginally optimizing operational efficiency. Though not the hoped-for sweeping growth strategy, this opportunity could have had value, but unfortunately GE converted it into a grandiose, unrealistic vision it could not deliver.

In 2011, the company correctly foresaw the emerging importance of what it coined the “industrial internet of things” (IIoT), combined with big-data analytics. This technology, GE argued, would revolutionize predictive maintenance––one day eliminating unplanned downtime––saving billions for GE and others. However, no longer a financial powerhouse and now refocusing on its industrial businesses, GE did not want to be seen as a big-iron dinosaur; it wanted to be cool––it wanted to be a tech company.

So, focused on the IIoT, GE dubbed itself the world’s “first digital industrial company.” It would build the dominant IIoT software platform, on which GE and others would then develop the analytics-applications needed for its predictive-maintenance vision. Immelt told a 2014 GE managers’ meeting, “If you went to bed last night as an industrial company, you’re going to wake up this morning as a software and analytics company.” This initiative was applauded as visionary and revolutionary. HBS’ Michael Porter noted:

“It is a major change, not only in the products, but also in the way the company operates…. This really is going to be a game-changer for GE.”

However, GE’s game-changer proved to be both overreach and strategically incoherent, ultimately changing little in the world. Going back to the Welch era, GE leaders had long believed they could master any business using GE managerial principles and techniques (e.g., Welch’s vaunted systems of professional management and development, the celebrated Six Sigma, the insistence on a dominant position in every market, and others). Even though overconfidence had already contributed to GE’s financial-business disaster, perhaps we shouldn’t be surprised by the 2016 stated “drive to become what Mr. Immelt says will be a ‘top 10 software company’ by 2020.” As GE leadership told the MIT Sloan Management Review in 2016:

GE wants to be “unapologetically awesome” at data and analytics… GE executives believe [GE] can follow in Google’s footsteps and become the entrenched, established platform player for the Industrial Internet — and in less than the 10 years it took Google.

The company implausibly declared its intent to lead and control IIoT/analytics technology, including its complex software and IT. Such control would extend to all industrial equipment owned by GE, its customers, and even its competitors. This vision was quixotic, not least for its inherent conflicts (e.g., competitors and some customers had no interest in sharing crucial data or otherwise enabling GE’s control).

GE’s Big Data initiative did offer a value proposition––eliminating unplanned downtime, which sounds valuable––to be achieved via IIoT/analytics. However, how much value could be delivered, relative to its costs and tradeoffs, and the best way to perform such analytics, would obviously vary dramatically among customers. The initiative seemed more rooted in a vision of how great GE would be at IIoT/analytics than in specific value propositions that specific segments would value. It thus betrayed the same internal focus that had plagued the company earlier.

In contrast, Caterpillar––as discussed in Part Three––used IIoT/analytics technology strategically, to better deliver its clear, core value proposition. Developed by deeply studying and living with its customers, Cat’s value proposition focuses on lower total-life equipment cost for construction and mining customers, provided via superior uptime. Cat also understood, realistically, that it was not going to become the Google of the industrial internet; rather than imagining it could do it all itself, it depended heavily on a partner with deep expertise in analytics. GE’s IIoT initiative was enthusiastic about the power of data analytics, but GE never seemed to grasp or act on the importance of the in-depth customer-understanding that Cat demonstrated so successfully.

In addition, the GE strategy did not seem intended to help develop major new product-innovations as part of delivering winning value propositions. IIoT/analytics technology could enhance continuous improvement in efficiency, cost, and quality, but could not return the company’s focus, as needed, to breakthrough product-innovation. Innovation consultant Greg Satell noted that GE saved billions in cost by “relentlessly focusing on continuous improvement.” At the same time, he attributes the scarcity of significant inventions at GE since the 1970s to a “lack of exploration.” He writes, “Its storied labs continuously improve its present lines of business, but do not venture out into the unknown, so, not surprisingly, never find anything new.”

Value can be delivered using IIoT/analytics. GE hired hundreds of smart data scientists and other capable people; no doubt it delivered value for some customers and could have done more had it continued investing. Yet it spent some $4 billion, made illusory claims to supplying––and controlling––everything in the industrial internet, and ultimately achieved minimal productive results. GE found it could not compete with the major cloud suppliers (e.g., Amazon and Microsoft). More important, effective analytics on the IIoT required vastly greater customization to specific sectors and individual customers than GE had assumed––not a single, one-size-fits-all platform. Before giving up in 2018, GE had convinced only 8% of its industrial customers to use its Predix platform.

Part Four: Energy Misfire (2001-2019)

So, after 2001 GE first tried to ride the GEC tiger and was nearly eaten alive in the process. Then it chased the illusion of IIoT dominance, with little to show for it. Meanwhile, GE was still not addressing its most pressing challenge: the need for a new, synergistic strategy for its industrial businesses, to replace the growth engine previously supplied by the finance businesses. Especially important were its core but inadequately understood energy-related markets. In these, GE harbored costly, internally-driven illusions, instead of freshly and creatively identifying potentially winning, energy-related value propositions.

After the financial crisis, the eventual exit from the financial businesses (except for financing specifically supporting the sale of GE industrial products) was already very likely. With two-thirds of the remaining company being energy-related, strategy for this sector clearly should have been a high priority. However, in its industrial businesses, including energy, GE had long been essentially internally-driven––prone to pursue what it was comfortable doing. (Admittedly, the IIoT venture was an aberration, where GE was not comfortable––for good reason––developing new software, big-data analytics, or AI.) Otherwise, in the energy-related businesses, GE stayed close to what it saw as core competencies, rather than rethinking and deeply exploring what these markets would most value in the foreseeable future.

In these markets, GE chose which fuels to focus on based on its immediate customers’ preferences and on where it saw competitive advantage. Although it developed a wind-power business, GE largely stayed loyal to fossil fuels. A 2014 interview in Fast Company explains why:

Immelt’s defining achievement at GE looked to be his efforts to move the company in a greener direction [i.e., its PR-marketing campaign, “Ecomagination”].… But GE can only take green so far; this organization fundamentally exists to build or improve the infrastructure for the modern world, whether it runs on fossil fuels or not. Thus, GE has simultaneously enjoyed a booming business in equipment for oil and gas fracking and has profited from the strong demand in diesel locomotives thanks to customers needing to haul coal around the country.… In the end, though, it will serve its customers, wherever they do business.

Serving immediate customers can be good practice. However, it becomes counter-productively customer-compelled if a business ignores unmistakable trends in technology costs and the preferences of many end-users. With a seemingly safe, internally-driven view, GE in 2014 made a myopic bet on fossil fuels, acquiring the gas-turbine operations of the French company Alstom. GE badly underestimated the rise of renewable energy, and the bet resulted in a $23 billion write-down.

As Part Four will review in some detail, this loss on gas turbines capped GE’s more fundamental, long-term failure, starting in about 2000, to realize its great historic opportunity––creatively leading and capitalizing on the global transition to zero-emission energy, especially renewables. The value proposition that most energy end-users worldwide wanted was increasingly clear after 2000––energy with no tradeoffs in reliable, safe performance, zero (not fewer) emissions, and lower cost.

In this failure in the energy market, GE no doubt saw itself as following its customers’ demands. In customer-compelled fashion, GE kept listening to its more immediate customers’ increasing hunger for fossil-fuel-based energy generation. The company would have needed a much more customer-focused, science-based perspective to see, and help catalyze, the market’s direction. It needed to study end-users, not just power-generators, and to project the increasingly inevitable major reductions in renewable-energy costs––anticipating the shift away from gas and other fossil fuels by many years, not just a few months.

Given its historical knowledge and experience, GE was uniquely well positioned to benefit long-term from the world’s likely multi-trillion-dollar investment in this value proposition by 2050. Admittedly, starting in the early 2000s, GE touted its “Ecomagination” campaign and built a good-sized wind-power business. However, taking this value proposition seriously––leading a global energy transformation––would have required a more proactively creative strategic approach: designing the comprehensive value-delivery-systems needed to lead, accelerate, and profitably capitalize on that global energy transition.

Fundamentally important, those new value-delivery-systems would have needed to include an aggressive return to the central focus on major product-innovation that built the company in its first century. In the first decade after 2000, many people, including most energy experts, were highly skeptical that a global transition to zero-emission energy was possible in less than most of the current century. Yet the basic technology needed for the energy transition in electricity and ground transportation––a crucial bulk of greenhouse emissions––had already been identified by 2000: renewable energy, especially solar and wind, bolstered by batteries for storage. As the last twenty years have largely confirmed, that technology––not major new, unimagined breakthroughs––only needed to be massively refined and driven down the experience-based cost curves.
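The “experience-based cost curve” logic here is Wright’s law: each doubling of cumulative production cuts unit cost by a roughly constant fraction (learning rates around 20% per doubling are widely cited for solar modules). A minimal sketch, with the learning rate and starting cost as illustrative assumptions:

```python
import math

def unit_cost(cumulative, initial_cumulative, initial_cost, learning_rate):
    """Wright's law: unit cost falls by `learning_rate` for each
    doubling of cumulative production."""
    b = -math.log2(1 - learning_rate)  # experience exponent
    return initial_cost * (cumulative / initial_cumulative) ** (-b)

# With an assumed 20% learning rate, ten doublings (1024x cumulative output)
# cut unit cost to about 11% of its starting level, since 0.8**10 ≈ 0.107.
print(unit_cost(1024, 1, 100.0, 0.20))
```

This is why no new scientific breakthrough was required: scale alone, compounded over doublings, does most of the work.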

As will be discussed in Part Four, key parts of solar-energy technology not only dropped in cost faster than anyone expected but also became unprofitable commodities. As GE and many others learned in the early 2000s, producing solar panels in competition with Chinese makers became a mostly losing business. However, renewable energy has many other elements––wind, storage (e.g., batteries), long-distance transmission, and more. Some of these will inevitably be profitable, even if cranking out commodity solar panels is never among them. The increasingly clear reality since 2000 has been that renewable energy will dominate the global market, and GE should have been playing a leading role.

Other technology, perhaps including hydrogen and some as-yet-undeveloped battery technologies, will be needed for industrial and other sources of emissions not likely replaceable by solar or wind. For electricity and ground transport, however, we now know that these proven renewable-energy technologies are lower cost than the fossil-fuel incumbents. In the wide range of product-innovations needed to enable this part of the energy transition, GE could and should have been leading the world these past twenty years, rather than issuing PR lip-service with meager substance.

Such actions to lead the energy transition, discussed in Part Four, would have included aggressive, imaginative GE value-delivery-systems involving not just power generation but also long-distance transmission, electrification of energy uses (e.g., heating, cooling, transportation, industrial processes), storage––especially battery––technology, and energy-focused financing. Sharing a zero-emissions value proposition could have created great synergy across these GE product-innovations and related businesses. GE could also have used its once-great credibility to influence public policy, including support for economically rational zero-emission technologies––not asking end-users to accept higher costs or other tradeoffs to save the planet. GE could have proactively and creatively generated great corporate wealth had policy evolved to allow zero-emission solutions to deliver their potentially superior value.

More strategically fundamental than GE’s inept, losing bet on natural-gas turbines after 2013 was GE’s historic lost opportunity in its core energy markets: the enormous and inevitable transition to zero emissions. The same customer-focused, science-based, synergistic strategy, including major product-innovation, that characterized GE’s first century could and should have been central to the company since 2000.

Eventual impact of GE’s last four decades––on GE and the world

After 1981, GE achieved a twenty-year meteoric rise behind the Welch strategy, but fundamentally failed to extend the creative, persistent focus on superior value-delivery via product-innovation that drove the company in its first century. The cumulative effect of the company’s strategic illusions and inadequate value-delivery was that GE eventually lost control of its destiny, along with most of its market value and importance. Even after the company’s recent strengthening, its market value is still only about 20% of its 2000 peak at the end of the Welch era, and less than 50% of its 2016 post-financial-crisis peak.

The world also incurred losses, including regrettable influences on business practice. Rushing to emulate GE, many companies embraced Six Sigma’s focus on continuous marginal improvement and an idolatry of maximizing shareholder value. For GE and its shareholders, these practices yielded some benefits. However, they also coincided with reduced value-delivery, including a sharp decline in major product-innovation. We can see some results of this influence in the world’s lagging innovation and productivity, notwithstanding the often-overrated inventions of the finance and tech sectors. Also lost to the world was the value GE might have delivered had it seriously acted on its key opportunity in energy. These losses were the tragedy in GE’s lost control of its destiny in the four decades after 1981.

Accordingly, after this Introductory Overview, the series continues with four parts:

Part One: Customer-Focused, Science-Based Product Innovation (1878-1980)

Part Two: Unsustainable Triumph of the Welch Strategy (1981-2010)

Part Three: Digital Delusions (2011-2018); and

Part Four: Energy Misfire (2001-2019)

Footnotes––GE Series Introductory Overview

[1] Though of course not using our present-day, value-delivery terminology.

Like a Compass, Big Data Helps –– If You Know What Direction to Take

Big Data Can Help Execute
But Not Discover New Growth Strategies

By Michael J. Lanning––Copyright © 2017-18 All Rights Reserved

Business investment in “big data analytics” (or “big data”) continues to grow. Some of this growth is a bet that big data will not only improve operations but also reveal hidden paths to breakthrough new growth strategies. This bet rests partly on the tech giants’ famous use of big data. We have recently seen the emergence––in part also using big data––of major new consumer businesses, e.g., Uber and Airbnb, discussed in this second post of our data-and-growth-strategy series. Again using big data, some promising growth strategies have also emerged recently in old industrials, including GE and Caterpillar (discussed in our next post). Many thus see big data as a road map to major new growth.

Typical of comments on the new consumer businesses, Maxwell Wessel’s HBR article How Big Data Is Changing Disruptive Innovation called Uber and others “data-enabled disruptions.” Indefatigable big-data fan Bernard Marr wrote that Uber and Airbnb were only possible with “big data and algorithms that drive their individual platforms… [or else] Uber wouldn’t be competitive with taxi drivers.” In late 2016, McKinsey wrote, “Data and analytics are changing the basis of competition. [Leaders] launch entirely new business models…[and] the next wave of disruptors [e.g., Uber, Airbnb are] … predicated on data and analytics.” MIT SMR’s Spring 2017 report, Analytics as a Source of Business Innovation, cited Uber and Airbnb as “poster children for data-driven innovation.”

Like a compass, big data only helps once you have a map––a growth-strategy. To discover one, first “become the customer”: explore and analyze their experiences; then creatively imagine a superior scenario of experiences, with positives (benefits), costs, and any tradeoffs, that in total can generate major growth. Thus, formulate a “breakthrough value proposition.” Finally, design how to profitably deliver (provide and communicate) it. Data helps execute, not discover, new growth strategies. Did Uber and Airbnb use data to discover, not just execute, their strategies? Let’s see how these discoveries were made.


Garrett Camp and Travis Kalanick co-founded Uber. Its website claims that, in Paris in late 2008, the two “…had trouble hailing a cab. So, they came up with a simple idea—tap a button, get a ride.” Kalanick later made real contributions to Uber, but the original idea was Camp’s, as confirmed by Kalanick’s 2010 blog and by his statement at an early event (quoted by Business Insider) that “Garrett is the guy who invented” the app.

However, the concept did not just pop into Camp’s mind one evening. As described below, the idea’s genesis was in his and his friends’ frustrating experiences using taxis. He thought deeply about these, experimented with alternatives, and imagined ideal scenarios––essentially the strategy-discovery methodology mentioned above, which we call “become the customer.” He then recognized, from his own tech background, the seeds of a brilliant solution.

Camp was technically accomplished. He had researched collaborative systems, evolutionary algorithms, and information retrieval while earning a Master’s in Software Engineering. By 2008 he was a successful, wealthy entrepreneur, having sold his business StumbleUpon––a “discovery engine” that finds and recommends relevant web content for users, using sophisticated algorithms and big-data technologies––for $75M.

As Brad Stone’s piece on Uber in The Guardian recounted earlier this year, the naturally curious Camp, then living in San Francisco, had time to explore and play with new possibilities. He and friends would often go out for dinner and bar hopping, and frequently be frustrated by the long waits for taxis, not to mention the condition of the cars and some drivers’ behavior. Megan McArdle’s 2012 Atlantic piece, Why You Can’t Get a Taxi, captured some of these complaints typical of most large cities:

Why are taxis dirty and uncomfortable and never there when you need them? Why is it that half the time, they don’t show up for those 6 a.m. airport runs? How come they all seem to disappear when you most need them—on New Year’s Eve, or during a rainy rush hour? Women complain about scary drivers. Black men complain about drivers who won’t stop to pick them up.

The maddening experience of too few taxis at peak times reflected the industry’s strong protection. For decades, regulation limited taxi licenses (“medallions”), constraining competition and protecting both revenue and medallion value. The public complained, but cities’ attempts to increase medallions always met militant resistance; drivers would typically protest at city hall. And if you didn’t like your driver or car, you could withhold the tip or complain, but without any real impact.

Irritated, Camp restlessly searched for ways around these limits. Since hailing cabs on the street was unreliable, he tried calling, but that was also problematic: dispatchers would promise a taxi “in 10 minutes” that often wouldn’t show; he’d call again, but they might lie or not remember him. Next, he started calling and reserving them all, taking the first to arrive––until the companies blacklisted Camp’s mobile number.

He then experimented with town-cars (or “black cars”). These were reliable, clean, and comfortable, with somewhat more civil drivers, though more expensive. However, as he told Stone, their biggest problem––which exacerbated their cost, and to a lesser degree also affected regular taxis––was filling dead time between rides. As McArdle wrote, “Drivers turn down [some long-distance fares since] they probably won’t get a return fare, and must instead burn time and gas while the meter’s off,” which can wipe out the day’s profit.

Camp could now imagine much better ride-hailing experiences, but how could he implement this vision? Could technology somehow balance supply and demand, and improve the whole experience for riders and drivers? At that moment (as he later told Stone), Camp recalled a futuristic James Bond scene he loved and often re-watched, from the 2006 Casino Royale: while driving, Agent 007’s phone shows an icon of his car, on a map, approaching The Ocean Club, his destination. Camp immediately recognized that such a capability could bring his ride-hailing vision to life.

When the iPhone launched in 2007, it not only included Google Maps; Camp knew it also had an accelerometer, which could detect whether a car was moving. Thus, the phone could function like a taxi meter, charging for time and distance. And in the summer of 2008, Apple had just introduced the App Store.
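The “phone as taxi meter” idea reduces to simple arithmetic: charge a base fare plus time and distance, both of which the phone could measure. A minimal, hypothetical sketch (the rates and formula are illustrative, not Uber’s actual pricing):

```python
def metered_fare(minutes, miles, base=2.00, per_minute=0.35, per_mile=1.75):
    """Hypothetical smartphone 'taxi meter': a base fare plus time and
    distance charges, as a phone with a clock and location data could compute."""
    return base + per_minute * minutes + per_mile * miles

# e.g., a 15-minute, 4-mile ride at the illustrative rates above
print(round(metered_fare(15, 4), 2))  # prints 14.25
```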

Camp knew this meant he could invent a “ride-hailing app” that would deliver benefits––positive experiences. With it, riders and drivers would digitally book a ride, track and see estimated arrival, optimize routes, make payments, and even rate each other. The app would also use driver and rider data to match supply and demand. This match would be refined by dynamic (“surge”) pricing, adjusting in real time as demand fluctuates. Peak demand would be a “tradeoff”: higher price but, finally, reliable supply of rides.
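The supply-demand matching via dynamic pricing described above might look, in highly simplified form, like the following; the ratio-based formula and the 3x cap are assumptions for illustration, not Uber’s actual algorithm:

```python
# Illustrative sketch of dynamic ("surge") pricing: the multiplier rises
# as ride requests outstrip available drivers. Formula and cap are
# assumptions, not Uber's real model.

def surge_multiplier(requests: int, drivers: int,
                     cap: float = 3.0) -> float:
    """Return a price multiplier >= 1.0 based on the demand/supply ratio."""
    if drivers <= 0:
        return cap
    ratio = requests / drivers
    if ratio <= 1.0:
        return 1.0          # supply covers demand: no surge
    return round(min(ratio, cap), 2)

print(surge_multiplier(80, 100))   # slack supply: normal pricing
print(surge_multiplier(150, 100))  # demand spike: price rises
```

The tradeoff in the text is visible here: when demand spikes, the multiplier rises, which costs riders more but draws more drivers onto the road.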

At this point, Camp grew more excited, sensing how big the concept might be, and pressed his friend Travis Kalanick to become CEO (Camp would refocus on StumbleUpon). Despite Kalanick’s famous controversies (ultimately leading to his stepping down as CEO), many of his decisions steered Uber well. Camp’s vision still included fleets of town-cars, which Kalanick saw as unnecessary cost and complexity. Drivers should use their own cars, while Uber would manage the driver-rider interface, not own assets. Analyzing the data, Kalanick also discovered the power of lower pricing: “If Uber is lower-priced, then more people will want it…and can afford it [so] you have more cars on the road… your pickup times are lower, your reliability is better. The lower-cost product ends up being more luxurious than the high-end one.”

With these adjustments, Uber’s strategy was fully developed. Camp and later Kalanick had “become the customer,” exploring and reinventing Uber’s “value propositions”––the ride-hailing experiences, including benefits, costs and tradeoffs, that Uber would deliver to riders and drivers. These value propositions were emerging as radically different from and clearly superior to the status quo. It was time to execute, which required big data, first to enable the app’s functionalities. Kalanick also saw that Uber must expand rapidly, to beat imitators into new markets; analytics helped identify the characteristics of likely drivers and riders, and cities where Uber success was likely. Uber became a “big data company,” with analytics central to its operations. It is still not profitable today, and faces regulatory threats; so its future success may increasingly depend on innovations enabled by data. Nonetheless, for today at least, Uber is valued at $69B.

Yet, to emulate Uber’s success, remember that its winning strategy was discovered not by big data, but by becoming its users. Uber studied and creatively reinvented key experiences––thus, a radically new value proposition––and designed an optimal way to deliver them. Now let’s turn to our second major new consumer business, frequently attributed to big data.

Airbnb’s launch was relatively whimsical and sudden. It was not the result of frustration with an existing industry, as with ride-hailing. Rather, the founders stumbled onto a new concept that was interesting, and appealing to them, but not yet ready to fly. Their limited early success may have helped them stay open to evolving their strategy.

They embarked on an extended journey to “become” their users, both hosts and guests; they would explore, deeply rethink, and reinvent Airbnb’s key customer-experiences and its value proposition. Big data would become important to Airbnb’s execution, but the evolutionary discovery of its strategy was driven by capturing deep insight into customer experiences. Providing and communicating its value proposition, Airbnb outpaced imitators and other competitors, and is valued today at over $30B. Like Uber, Airbnb is a great example of the approach and concepts we call “value delivery,” as discussed and defined in our overview, Developing Long-Term Growth Strategy.

In 2007, Joe Gebbia and Brian Chesky, both twenty-seven and friends from design school, shared a San Francisco apartment. They hoped for entrepreneurial opportunities, but needed cash when their rent suddenly increased. A four-day design conference was coming to town, and most hotels were booked. Gebbia suggested “turning our place into ‘designers bed and breakfast’–offering…place to crash…wireless internet…sleeping mat, and breakfast. Ha!” The next day, they threw together a web site. Surprisingly, they got three takers; all left happy, paying $80 per night (covering the rent). They all also felt rewarded by getting to hear each other’s stories; the guests even offered advice on the new business. Chesky and Gebbia gave each other a look of, “Hmmm…” and a new business had been born.

The concept seemed compatible with the founders’ values; as The Telegraph later wrote, “Both wanted to be entrepreneurs, but [not] ‘create more stuff that ends up in landfill.’ …a website based on renting something that was already in existence was perfect…”

However, they first underestimated and may have misunderstood their concept. An early headline on the site read, “Finally, an alternative to expensive hotels.” Brian Chesky says:

We thought, surely you would never stay in a home because you wanted to…only because it was cheaper. But that was such a wrong assumption. People love homes. That’s why they live in them. If we wanted to live in hotels, more homes would be designed like hotels.

They were soon joined by a third friend, engineer Nathan Blecharczyk, who says that this mix of design and engineering perspectives [view 01:19-01:40 in interview]:

…was pretty unusual and I actually attribute a lot of our success to that combination. We see things differently. Sometimes it takes a while to reconcile those different perspectives but we find when we take the time to do that we can come up with a superior solution, one that takes into account both points of view.

Soon after the initial launch, they used the site frequently, staying with hosts, and gathering insights into experiences. Two key early discoveries were: 1) Payments, then settled in cash from guest to host, created awkward “So, where’s my money?” moments, and should be handled instead with credit card, through the Airbnb site; and 2) while they originally assumed a focus on events, when hotels are over-booked and expensive, users surprised them by asking about other travel, so they realized that they had landed in the global travel business. A 2008 headline read, “Stay with a local when traveling.”

In August of 2008, the team thought they had made a breakthrough. Obama would accept the Democratic nomination before 100,000 people in Denver, which only had 30,000 hotel rooms. So, Airbnb timed its second launch for the convention. Sure enough, they saw a huge booking spike…but it promptly dropped back to near-zero, days later.

Searching for a promotional gift for hosts, they pulled off a scrappy, startup stunt. They designed and hand-assembled 500 boxes of cereal, with covers they convinced a printer to supply for a share of sales: Obama Os (“Hope in every bowl”) and Cap’n McCain’s (“Maverick in each bite”). CNN ran a story on it, helping sell some at $40 per box, for over $30K total––enough to survive a few more months.

But by mid-2009, they were still stalled, and about to give up. About fifteen investors all ignored Airbnb, or saw nothing in it. Then Paul Graham, a founder of “Y Combinator” (YC, a highly-regarded, exclusive start-up-accelerator), granted Airbnb his standard five-minute interview, which he spent telling them to find a better idea (“Does anyone actually stay in one of these? …What’s wrong with them?”) But on the way out the door, thinking all was lost anyway, Chesky handed a box of Obama Os to Graham, who asked, “What’s this?” When told, Graham loved this story of scrappy, resourceful unwillingness to die. If they could sell this cereal, maybe they could get people to rent their homes to strangers, too.

Joining YC led to some modest investments, keeping Airbnb alive. Still, weekly revenues were stuck at $200. Paul pushed them to analyze all their then-forty listings in their then-best market, New York. Poring over them, Gebbia says, they made a key discovery:

We noticed a pattern…similarity between all these 40 listings…the photos sucked…People were using their camera phones [of poor quality in 2009] or…their images from classified sites. It actually wasn’t a surprise that people weren’t booking rooms because you couldn’t even really see what it is that you were paying for.

Paul urged them to go to New York immediately, spend lots of time with the hosts, and upgrade all the amateur photography. They hesitated, fearing professional photography was too costly to “scale” (i.e., to deploy across many listings). Paul told them to ignore that (“do things that don’t scale”). They took that as license simply to discover what a great experience would be for hosts, and only worry later about scale economics. A week later, the results of the test came in, showing far better response and doubling total weekly revenue to $400. They got it.

The team reached all 40 New York hosts, selling them on the (free) photos, but also building relationships. As Nathan Blecharczyk explains [view 18:50-21:27 in his talk], they could follow up with suggestions that they could not have made before, e.g., enhancements to wording in listings or, with overpriced listings, “start lower, then increase if you get overbooked.”

Of course, hi-res photography is common on websites today, even craigslist, and seems obvious now, as perhaps it is to think carefully about wording and pricing in listings. However, these changes made a crucial difference in delivering Airbnb’s value proposition, especially in helping hosts romance and successfully rent their home. This time-consuming effort greatly increased success for these hosts. After that, “people all over the world started booking these places in NY.” The word spread; they had set a high standard, and many other hosts successfully emulated this model.

To even more deeply understand user experiences, the team used “becoming the customer,” or what Gebbia calls, “being the patient,” shaped by his design-thinking background.

[As students, when] working on a medical device we would go out [and] talk with…users of that product, doctors, nurses, patients and then we would have that epiphany moment where we would lay down in the bed in the hospital. We’d have the device applied to us…[we’d] sit there and feel exactly what it felt like to be the patient…that moment where you start to go aha, that’s really uncomfortable. There’s probably a better way to do this.

As Gebbia explained, “being the patient” is still an integral piece of Airbnb’s culture:

Everybody takes a trip in their first or second week [to visit customers, document and] share back to the entire company. It’s incredibly important that everyone in the company knows that we believe in this so much…

They gradually discovered that hosts were willing to rent larger spaces, from air beds, to rooms, entire apartments, and houses. They also further expanded Airbnb’s role, such as hosting reviews and providing a platform for host/guest communications. “Becoming the customer,” they discovered a “personal” dimension of the experience; in 2011, Gebbia recounted [view 19:41-20:36] being a guest in “an awesome West Village apartment,” owned by a woman named Katherine (away then), and he was greeted with:

…a very personalized welcome letter…a Metro Card for me, and [menus from] Katherine’s favorite take-out places…I just felt instantly like, ‘I live here!’ [And on the street] felt like I have an apartment here, I’m like a New Yorker! So, it’s this social connection…to the person and their spaces; it’s about real places and real people in Airbnb. This is what we never anticipated but this has been the special sauce behind Airbnb.

This idea of personal connection may have helped address Airbnb’s crucial problem of trust (“who is this host, or guest?”). Again, they thought deeply about the problem, both redesigning experiences and applying digital solutions. One was “Airbnb Social Connections,” launched in 2011. As TechCrunch wrote, a prospective guest can:

…hook up the service to your social graph via Facebook Connect. Click one button, opt-in, and [in] listings for cities around the world you’ll now see an avatar if a Facebook friend of yours is friends with the host or has reviewed the host. It’s absolutely brilliant.” [Chesky said it also helps guests and hosts] “have something in common…and helps you find places to stay with mutual friends, people from your school or university…
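At its core, the Social Connections feature is a friends-of-friends lookup; a minimal sketch, with an entirely hypothetical friendship graph:

```python
# Minimal sketch of the friends-of-friends check behind a feature like
# "Social Connections": does the viewer share a mutual friend with a host?
# The friendship graph below is hypothetical.

friends = {
    "guest":  {"ana", "bob", "cruz"},
    "host_1": {"bob", "dana"},
    "host_2": {"eve"},
}

def mutual_friends(viewer: str, host: str) -> set:
    """People who are friends with both the viewer and the host."""
    return friends.get(viewer, set()) & friends.get(host, set())

print(mutual_friends("guest", "host_1"))  # a shared friend: show the avatar
print(mutual_friends("guest", "host_2"))  # no overlap: no badge shown
```

A non-empty intersection is what triggers the avatar in the listing, turning an anonymous host into “a friend of a friend,” which is exactly the trust signal described above.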

To further build trust, users were required to “verify, meaning share their email…online and offline identity.” Hosts were asked “to include large photos of themselves on their profiles.” Hosts and guests were urged to communicate before each stay.

Next, the team searched for yet more dimensions of users’ positive experiences. As Leigh Gallagher wrote in Fortune, the team (in 2012) pondered, “Why does Airbnb exist? What’s its purpose?” The global head of community interviewed 480 employees, guests, and hosts, and they found that guests don’t want to be “tourists” but to engage with people and culture, to be “insiders.” The idea of “belonging” emerged, and a new Airbnb mission: “to make people around the world feel like they could ‘belong anywhere.’” Chesky explains, “cities used to be villages. But…that personal feeling was replaced by ‘mass-produced and impersonal travel experiences,’ and along the way, ‘people stopped trusting each other.’”

They adopted the “Bélo” icon to echo this idea. Some mocked all this. TechCrunch declared it a “hippy-dippy concept”; others suggested that users just wanted a “cheap and cool place to stay,” not “warm and fuzzy” feelings. But Gallagher argues that “belonging” can be more substantive than, “having tea and cookies with [your host]”:

It was much broader: It meant venturing into neighborhoods that you might not otherwise be able to see, staying in places you wouldn’t normally be able to, bunking in someone else’s space, [having an experience] “hosted” for you, regardless of whether you ever laid eyes on him or her.

“Belonging” may have been a little warm and fuzzy, or even hokey, but Gallagher cites evidence that the idea seemed to resonate with many users. In late 2012, wanting to build further on this notion and having read an article in Cornell Hospitality Quarterly, Chesky began thinking that Airbnb should focus more on “hospitality.” He read a book by Chip Conley, founder of a successful boutique-hotel chain. Conley wanted guests to check out, after three days, as a “better version of themselves”; he talked of democratizing hospitality, which had become “corporatized.” Airbnb hired him.

Conley gave talks to Airbnb hosts and employees worldwide, and established a “centralized hospitality-education effort, created a set of standards, and started a blog, a newsletter, and an online community center where hosts could learn and share best practices.” He also started a mentoring program in which experienced hosts could teach new ones about good hospitality. Examples of the new standards included:

  • Before accepting guests, try to make sure their idea for their trip matches your “hosting style”; [e.g., if they want] a hands-on host and you’re private, it may not be the best match.
  • Communicate often; provide detailed directions. Establish any “house rules” clearly.
  • …beyond basics? [placing] fresh flowers or providing a treat upon check-in, like a glass of wine or a welcome basket. Do these things…even if you’re not present during the stay.

Airbnb took longer than Uber to discover their strategy, but they got there, again by climbing into the skin of users, “becoming the customer” (or “being the patient”), living their users’ experiences. Like Uber, they then used big data extensively, to help execute. For example, building on the early New York experiments to help hosts set optimal prices, Airbnb used big data to create, as Forbes wrote, “Pricing Tips.” This “constantly updating guide tells hosts, for each day of the year, how likely it is for them to get a booking at the price they’ve currently chosen.” Airbnb’s machine-learning package further helps hosts quantitatively understand factors affecting pricing in their market.
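A “Pricing Tips”-style estimate can be sketched as a simple price-response curve; the logistic form, the sensitivity coefficient, and the numbers below are illustrative assumptions, not Airbnb’s actual model:

```python
# Hedged sketch of a "Pricing Tips"-style estimate: the likelihood of a
# booking falls as the host's price rises above comparable listings.
# The logistic form and coefficient are illustrative assumptions.
import math

def booking_probability(price: float, market_price: float,
                        sensitivity: float = 4.0) -> float:
    """Probability of a booking, given price relative to the market."""
    premium = (price - market_price) / market_price  # e.g. +0.25 = 25% above
    return 1.0 / (1.0 + math.exp(sensitivity * premium))

for p in (80, 100, 130):
    print(p, round(booking_probability(p, market_price=100), 2))
```

A real system would fit such a curve per market and per date from historical bookings, which is where the big data comes in; the strategic insight that hosts needed pricing help at all came from the hands-on New York visits.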

In combination with the above initiatives to strengthen trust, personal connection, and even that sense of “belonging anywhere,” big data helped Airbnb continue to improve the value proposition it delivered. Its great strategic insights into user experiences, and its superb execution, allowed Airbnb to outdistance its imitative competitors.

So, both Uber and Airbnb made great use of big data. Yet, for both these game-changing businesses, to “become the customer” (or “being the patient”) was key to discovering the insights underlying their brilliant, breakthrough-growth strategies.

*   *   *

This post follows our first, Who Needs Value Propositions, in this data/strategy series.  Our next post (#3–– Big Data as a Compass––Industrial/B2B Businesses) looks at the industrial examples mentioned earlier (GE and Caterpillar), to compare their lessons with those in the above consumer cases. We then plan three more posts in this series:

  • #4: Dearth of Growth––Why Most Businesses Must Get Better at Discovering & Developing New Growth Strategies
  • #5: No Need to Know Why? Historical Claims (“Correlation is all we need, Causality is obsolete”) Propagated Misplaced Faith in Big Data as fount of new growth strategies
  • #6: Powerful for execution, big data is also prone to chronic, dangerous errors

Who Needs Value Propositions When We Have Big Data?

Asking too much from Big Data Analytics, Digital Marketers Confuse Growth Strategy with Marketing Efficiency and Short-Term Clicks

Posted June 14, 2017

In the past decade, “big data analytics” (or “big data”) has grown dramatically in business. Big data can help develop and execute strategy—as in Google and Facebook’s ad businesses, and Amazon and Netflix’s recommendation engines. Seeing these huge tech successes, many decided to just emulate how big data is used, hoping that big data analytics alone can drive development of long-term growth strategy. This belief is a delusion.

Winning strategies require that businesses discover new or improved experiences that could be most valued (though unarticulated) by customers, and redesign their businesses to profitably deliver these experiences. Big data can increase communication efficiency and short-term sales, or “clicks”, but changing the most crucial customer experiences can transform behaviors, attitudes, and loyalty, leading to major growth. Such insight is best found in many businesses by in-depth exploration and analysis of individual customers—and cannot be found in the correlations of big data. Some questions are most easily answered with big data, but availability of data should not drive what questions to ask. Data-driven priorities can obscure fundamental strategic questions, e.g. what could customers gain by doing business with us—what value proposition should we deliver, and how?

Discovering such insights requires deeply understanding customers’ usage of relevant products or services. In some businesses, such as online retailers, customers’ buying-experiences constitute usage of the service, so these businesses do have usage data, and can use big data in developing strategy. For most, such as product producers, however, usage happens only after purchase, so they have purchase but not usage data, and cannot properly use big data to develop strategy. Feeling compelled to use big data, such businesses may use it anyway, on the data they have, which can help achieve short-term sales, but not to develop long-term growth strategy. However, these businesses still can— and must—develop insights into what usage experiences to focus on changing, and how.

Digital marketing now plays a major role in developing business strategy, and heavily uses big data. Big data predictive algorithms analyze customers’ past transactions and purchase or shopping behaviors, to increase the efficiency of matching customers with marketing offers, and strengthen short-term sales. Sustained major growth requires more than ratcheting up reach efficiency and tweaking the weekend promotional tally. Sustained growth requires creative exploration of customers’ current experiences, to discover breakthrough value propositions, and design ways to profitably provide and communicate them. This post and follow-ups discuss these concerns and suggest solutions.

Predicting transactions is not strategy

As illustration, a Customer Experience Management (CEM) system by Sitecore helps fictional apparel maker “Azure” (Sitecore’s name) use big data to customize marketing to individual customers. Here, Azure intervenes with consumer “Kim” on her decision journey. When she visits their site anonymously, the data shows her matching their active-mother profile. Clicking on a shoes ad, she signs up for email alerts, providing her name and email. Azure begins building her profile. They email a belts promotion to customers predicted by the data as potentially interested—Kim buys one. Later, real-time location data shows Kim near an Azure store, so CEM texts an in-store discount on a new boots line; Azure is confident she’ll like it based on her past actions. Scanning the coupon on Kim’s phone, the CEM enables the clerk to offer Kim another product, a child’s backpack, based on Kim’s profile. Kim is impressed—Azure understands her interests, tracking her every action. She joins Azure’s loyalty program, giving her sneak peeks at coming products. With data showing that Kim most often accesses the site by smart phone, Azure offers her their new mobile app. Via big data, Azure has improved the shopping and buying experiences, and efficiently stimulated short-term sales.
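The profile-driven offer matching in the Azure scenario can be sketched as a simple scoring rule; the customer, offers, and scoring weights below are fictional, like the scenario itself:

```python
# Toy sketch of the profile-driven offer matching in the "Azure"/"Kim"
# scenario: score each offer against a customer's observed interests and
# segment, then present the best match. All data here is fictional.

kim = {"segment": "active-mother", "interests": {"shoes", "belts"}}

offers = [
    {"name": "boots line discount", "tags": {"shoes"}, "segments": {"active-mother"}},
    {"name": "kids backpack",       "tags": {"kids"},  "segments": {"active-mother"}},
    {"name": "formal suits promo",  "tags": {"suits"}, "segments": {"professional"}},
]

def score(offer: dict, customer: dict) -> int:
    """Crude relevance score: interest overlap plus a segment-match bonus."""
    s = len(offer["tags"] & customer["interests"])  # interest overlap
    if customer["segment"] in offer["segments"]:    # segment match
        s += 1
    return s

best = max(offers, key=lambda o: score(o, kim))
print(best["name"])
```

This is the whole paradigm in miniature: past behavior predicts the next likely transaction. Nothing in the scoring rule asks why Kim buys, or what her usage experiences are, which is exactly the limitation the post goes on to argue.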

In applications of big data for marketing and growth-strategy, data scientists search for previously unknown correlations among customer transactional and behavioral data. For growth strategy, however, more understanding and creative thought is needed about why customers do what they do, what the consequential experiences have been, what is imperfect in these experiences, and how the business might cause these new or different experiences. These are typically unarticulated opportunities for improved customer experiences. Identifying them requires skilled observation and creative interpretation of current experiences—not replicable in most businesses by data collection and analytics. Such analysis, including customers’ product-usage behaviors, not just purchase, is crucial to developing value propositions that can generate major new growth.

Urging us to “Use big data to create value for customers, not just target them,” Niraj Dawar said in HBR that big data holds out big promises for marketers, including answers to “who buys what, when?” Marketers “trained their big data telescopes at a single point: predicting each customer’s next transaction,” in detailed portraits of consumers’ media preferences, shopping habits, and interests, revealing her next move.

In the Azure narrative, Azure is “pretty confident” of what Kim will want, where and when, based on understanding her interests and interactions. In addition to targeting, big data allows “personalizing”—using our knowledge and references to customers’ past purchases and interests, to make our marketing more relevant and thus more effective in winning that next short-term promotional sale. This saga, of Kim’s “well-guided shopping journey” with Azure, leaves Kim happy (though not entirely of her own free will). In this way, it is reminiscent of Minority Report’s mall scene. The novel and 2002 film focused on prediction (“precognition”) of crimes not yet committed (supernaturally foreseen by “PreCogs”). We can hope this premonition is only a dystopic nightmare, but marketers may find the film’s futuristic marketing a utopian dream. The marketing is precisely targeted and highly personalized—ads and holographic greeter automatically recognize and call out the character’s name, reminding him of a recent purchase.

Fifteen years ago, the sci-fi film’s marketing technology was showing us the future—ever more accurate predictions of each customer’s next purchase. Big data is thus a kind of commercial precognition. Data scientists are PreCogs using big data, not supernatural powers. Both narratives are fictional, but illustrate the big data logic for marketing and growth-strategy. Able to predict the customer’s next transaction, the CEM produces targeted marketing, more efficient in customer-reach. Personalized marketing is more relevant, helping it stimulate short-term sales. A fundamental problem with this paradigm is that growth strategy needs more than accurate predictions of transactions. Such strategy must transform behaviors, attitudes and loyalty of customers and other players in the chain, based on insights about the causality underlying correlations.

Summary: Strategy is More than Prediction

Marketers are right to have yearned for much more factual data on what customers do, i.e. their behaviors. However, with big data it has been easy and commonplace to overemphasize customers’ behavior, especially as related to their buying process, without adequately understanding and analyzing the rest of their relevant experience. Businesses must understand customers’ usage experience, not just buying. They must also explore what’s imperfect about this experience, how it could be improved for the customer, what value proposition the business should deliver to them, and how. Such exploration must discover the most powerful, unarticulated customer-opportunities for greater value delivery, and redesign the business to profitably realize such opportunities. These traits are essential to how strategy is different from prediction—strategy must focus on what we want to make happen and how, not just what we might bet will happen.

Kim’s past transactional behavior is analyzed to predict what she’ll likely want next, but needs to be pushed further, to discover experiences and value propositions that could influence her, and yield long-term growth. (See a similar complaint about limitations of data, from Clayton Christensen et al.) Actions—including product and service improvements, and intense focus of marketing communications on customer benefits—must then be designed to optimally deliver these value propositions.

Growth of big data in tandem with digital marketing

IDC estimates global revenue for business data analytics will exceed $200B by 2020. As a recent review said, this expansion was enabled by several trends: continued rapid expansion of data, doubling every three years, from online digital platforms, mobile devices, and wireless sensors; huge capacity increases and cost reductions in data storage; and major advances in analytic capabilities including computing power and the evolution of sophisticated algorithms. Previously, “industry poured billions into factories and equipment; now the new leaders invest…in digital platforms, data, and analytical talent.” This investment expands the ability to predict the next short-term transaction, increase marketing-communications efficiency and promotional impact. It also drains resources needed for the more difficult but, as argued here, more strategically crucial exploration of customers’ usage experiences, and discovery of breakthrough-growth value propositions.

Using digital technology to market products and services, the digital marketing function has risen rapidly. Last year for the first time, US digital ad-spending surpassed TV, the traditional dominant giant. And digital marketing, both the source of most big data and the easiest place to apply it, increasingly leads development of business strategy.

Efficiency and relevance: important but usually much less so than effectiveness

More efficient marketing is desirable, but only if it’s effective, which is often taken for granted in the focus on efficiency. Much digital marketing faith is put in the four-part promise of “the right ad or offer, to the right person, at the right time, via the right place” (see here, and here). Most big data efforts focus on the last three, which mostly enhance efficiency, rather than the “right ad,” which determines effectiveness.

Hunger for efficiency also drives the focus on targeting. Personalizing, when possible and affordable, can also make customers more willing to hear the message, increasing efficiency—and possibly effectiveness—by its greater relevance.

However, effectiveness is the far more crucial issue. If a message does not credibly persuade customers, it is still of little use to the business, even if “efficient.” But targeting and personalizing marketing typically do not identify what behaviors or attitudes to change, or how to change them. This more fundamental strategic goal requires deeper understanding of the unarticulated value to customers of improved experiences, and detailed creative exploration of the business’ potential to profitably cause these improvements.

Reinforcing the predominant near-term and efficiency focus of big data in digital marketing is the nature of online sources typically available for big data. McKinsey estimated that, “so much data comes from short-term behavior, such as signing up for brand-related news and promotions on a smartphone or buying a product on sale. That short-term effect typically comprises 10 to 20 percent of total sales, while the brand…accounts for the rest.” This short-term nature of the readily available data biases marketers to focus on short-term transactional results.

Location-based, real-time big data—another advance in short-term promotion

It seems worth noting here that location-based marketing excites the digital marketing world, which sees it as the “next big thing”; campaigns from Skyy Vodka and Starbucks are illustrative examples.
As location data gets more accurate (problematic today) this approach will again improve promotional efficiency. In one illustrative test recounted in Ad Age, Brown-Forman, supplier of Herradura tequila, teamed with Foursquare (a search-and-discovery mobile app that helps find “perfect places [food, entertainment, etc.]”). Foursquare used Brown-Forman’s list of accounts where Herradura is sold, to target mobile and other Herradura ads to consumers whose mobile was close to (or had been in) shops, bars, or restaurants in the account list. They saw 23% increased visits to accounts, a positive signal.
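The geofenced targeting in the Herradura test can be sketched as a proximity check against the account list; the coordinates and the 0.5 km radius below are invented for illustration:

```python
# Sketch of location-based targeting: show an ad only when a device is
# within a radius of an account (shop/bar) on the list. Coordinates and
# radius are made up; haversine gives great-circle distance in km.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

accounts = [(40.7411, -73.9897), (40.7484, -73.9857)]  # hypothetical bars

def should_target(device_lat, device_lon, radius_km=0.5) -> bool:
    """True if the device is within radius_km of any listed account."""
    return any(haversine_km(device_lat, device_lon, lat, lon) <= radius_km
               for lat, lon in accounts)

print(should_target(40.7410, -73.9900))  # right next to an account
print(should_target(40.6000, -74.1000))  # far away: no ad served
```

As the post notes, this improves promotional efficiency (reaching people near the point of sale), but says nothing about whether the ad itself is effective.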

Since big data was applied early by direct-marketing companies, big data today (further illustrated above by advances in location-based marketing) works more like direct-response marketing than demand-generation. The problem, as noted earlier, is that businesses more than ever also need the latter—demand-generating activity that creates loyalty, and thus the behavioral changes that produce long-term growth. Some businesses may feel they don’t need these “luxuries” when cheap, automated big-data options—digital PreCogs—proliferate.

But most businesses do need to make these serious strategic investments, in addition to and complementary with big data analytics. Having digitally captured countless petabytes of data describing Kim’s every action of shopping and buying, the business managers now need to spend time with Kim learning about her usage of that apparel. What were her experiences before and during usage of those shoes, the belt, and other items? And what of her daughter’s experiences with the backpack? What was imperfect, what could some better experiences be, what would be an improved superior value proposition, and what would it take to provide and communicate that proposition effectively and profitably? These intensively customer-focused actions can enable the discovery and activation of powerful insights for profitably influencing customers’ (and others’) behavior, a key basis for generating profitable major growth over time.

*   *   *

As mentioned above, this blog series will expand on these concerns about the way that big data analytics has evolved for use in growth strategy, including digital marketing; and will expand on the above recommended solutions for marketers and businesses, including how these solutions apply to most businesses.

Value Delivery Blog


 4 Ways to Improve Delivering Profitable Value

Posted 4/13/15

Make Value Propositions the Customer-Focused Linchpin to Business Strategy

We suggest that businesses should be understood and managed as integrated systems, focused single-mindedly on one thing – profitably delivering superior Value Propositions, what we call delivering profitable value (DPV). Most, however, are not. Some readers may assume this discussion can only be a rehash of the obvious – surely everyone ‘knows’ what Value Propositions (VPs) are, and why they matter. But we suggest that most applications of the VP concept actually reflect a fundamentally poor understanding of it, and fail to get the most out of it for developing winning strategies. In this post we’ll summarize 4 ways to improve on delivering profitable value, using the VP concept far more effectively – as the customer-focused linchpin to your strategy.

Delivering Profitable Value – Let’s first recap the key components of this approach:

Real & Complete Value Proposition – A key element of strategy; internal doc (not given to customers); as quantified as possible; makes 5 choices (discussed in depth here):

  • Target customers (or other entities) for this VP?
  • Relevant timeframe in which we will deliver this VP?
  • What we want customers to do (e.g., buy/use, and/or other behaviors/changes)?
  • Their competing alternatives (competitors, status quo, new technologies, etc.)?
  • Resulting experiences they will get & we deliver? Not a list of products & performance attributes, but the core of a VP; discussed here and further here, they are:
    • Specific, measurable events/processes – outcomes – in customer’s life/business, that result from doing as we propose (e.g., buy/use products/services, etc.)
    • Includes price and tradeoffs (inferior or equal experiences)
    • All as compared to competing alternatives

Deliver the chosen VP – A real VP identifies what experiences to deliver, not how; so manage each business as a Value Delivery System with 3 integrated high-level functions:

  • Choose the VP (discover/articulate a superior VP focused on resulting experiences)
  • Provide it (enable the VP/experiences to happen via product/service/attributes, etc.)
  • Communicate it (ensure customers understand/believe it via Marketing, Sales, etc.)

Profitable Value? – If customers conclude that a VP is superior to the alternatives, it generates revenues; if the cost of delivering it is less than those revenues, then the business creates profit (or shareholder wealth) – thus, it is delivering profitable value.
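The profitability test in this definition reduces to simple arithmetic; a minimal sketch, with all figures invented for illustration:

```python
def delivers_profitable_value(revenue, cost_to_deliver):
    """A VP delivers profitable value when the revenue generated by
    customers choosing it exceeds the full cost of providing and
    communicating it."""
    return revenue > cost_to_deliver

# Hypothetical: a VP wins $1.2M in revenue at $0.9M total delivery cost
print(delivers_profitable_value(1_200_000, 900_000))  # True
print(delivers_profitable_value(800_000, 900_000))    # False
```

The substance, of course, lies in the two inputs: revenue depends on customers judging the VP superior to their alternatives, and cost must include everything required to both Provide and Communicate it.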

*  *  *  *  *

4 areas where many businesses can improve on delivering profitable value:

  1. Avoid misunderstood, confused, and trivial definitions of ‘Value Proposition’
  2. Deliberately deliver the VP – rigorously define, link & manage each function to help Provide and/or Communicate the resulting experiences
  3. Think profitable value-delivery across the entire chain, not just the next link
  4. Discover new value-delivery insights by primarily exploring and analyzing what customers actually do, more than what they think and say they want

Now let’s consider each of these 4 areas in more detail:

  1. Avoid commonly misunderstood, confused or even trivialized definitions of a VP

The table below summarizes some common misperceptions about Value Propositions, followed by some discussion of the first two.


Of these misperceptions, the first two are perhaps most fundamental, being either:

  • Our message – an external document to directly communicate with customers, explaining why they should buy our offering; part of execution, not strategy


  • Part of strategy, but focused primarily on us (not customers) – our products/services, performance-attributes, functional skills, qualities, etc., not customers’ experiences

It’s not your Elevator Speech – a VP is strategy, not execution – For much greater strategic clarity and cross-functional alignment, avoid the common misunderstanding that equates a VP with messaging. Execution, including messaging, is of course important. A VP, as part of your strategy, should obviously drive execution, including messaging; but strategy and execution are not the same thing. If you only articulate a message, without the guiding framework of a strategy, you may get a (partial) execution – the communication element – of some unidentified strategy. But the strategy itself was never developed and articulated! The execution might still work, but based more on luck than on an insightful, fact-based understanding of customers and your market.

This common reduction in the meaning of a VP to just messaging not only confuses execution with strategy, but also addresses only one of the two fundamental elements of execution. That is, a VP must be not only Communicated, but also Provided – made to actually happen, such as via your products/services, etc. If customers buy into your messaging – the communication of your VP – but your business does not actually Provide that VP, customers might notice (at least eventually). Though some businesses actually attempt this approach – promising great things but not making good on them – and may even get away with it for a limited time, a sustainable business obviously requires not only promising (Communicating) but actually Providing what’s promised.

And it’s not about us – focus VPs on customers, not our products, services, etc. – The other common misuse of the VP concept starts by treating it (rightly) as a strategic decision and internal document. But then (again missing the point) such a so-called VP is focused primarily on us, our internal functions and assets, rather than on the customer and resulting experiences we will deliver to them.

Here it’s helpful to recall the aphorism quoted by the great Marketing professor Ted Levitt, that people “don’t want quarter-inch drill bits, they want quarter-inch holes.” A real VP focuses on a detailed description of the ‘hole’ – the resulting experiences of using the drill bit – not on a description of the drill bit. Of course, the drill bit is very important to delivering the VP, since the customer must use the drill bit, which must have the optimal features and performance attributes, to get the desired quarter-inch hole. But first, define and characterize the VP in adequately detailed, measurable terms; then separately determine the drill-bit characteristics that will enable the desired hole.

  2. Deliberately deliver the VP – rigorously define, link and manage what each function must do to help Provide and/or Communicate the resulting experiences

It’s impossible to link Providing and Communicating value without a defined VP. However, even with a chosen VP, it is vital to explicitly link its resulting experiences to the requirements for Providing it (e.g., product and service) and for Communicating it. Companies can improve market share and profitability by rigorously defining the VP(s) they aspire to deliver and then rigorously linking them to the Providing and Communicating processes. Failure to make the right links leads to a good idea that is not well implemented. (See more discussion of this Value Delivery framework here.)


  3. Think value-delivery across the entire chain, not just the next link

Most businesses operate within a value delivery chain – whether simple, or long and complex.

Many companies survey their immediate, direct customers, asking what they most value. Less often, they may ask what that customer thinks is wanted by players further down the chain. They may rely for insight on that next customer, who may or may not be knowledgeable about others in the chain. There is often great value in exploring entities further down the chain, to understand how each link can provide value to other links, including final customers – often with important implications for the manufacturer, among others. (See more discussion of Value Delivery Chains here.)


  4. For Value Proposition insights, explore and analyze what customers actually do, not what they think they want

Businesses often conduct research essentially asking customers, in various forms, to specify their needs. A limitation of such research is that customers often filter their answers by what they believe are the supplier’s capabilities. We believe a better way is to deeply study what entities (at several links in the chain) actually do. First capture a virtual “Video One” documenting current experiences, including how an entity uses relevant products/services. Then create “Video Two,” an improved scenario enabled by potential changes by the business in benefits provided and/or price, which somehow delivers more value to the entity than in Video One. Then construct a third virtual video capturing the competing alternatives. Finally, extract the new, superior VP implied by this exploration. The results come much closer to a winning VP than those from asking customers what they want. (See here for more discussion of creatively exploring the market using this methodology.)


Shaping Your View of B2B Value Props

Posted 3/31/15

In his recent interview of Michael Lanning (shared in the previous post of this Blog), Brian Carroll asked Mike about the role of Value Props in B2B strategy; Brian wrote about this conversation in the B2B Lead Blog.

Real Value Props – Direct from the Source

Posted 3/31/15

Recently Michael Lanning was interviewed by Brian Carroll, Executive Director, Revenue Optimization, MECLABS Institute, for the MECLABS MarketingExperiments Blog. Brian asked Mike, as the original creator of the Value Proposition concept in 1983 while a consultant for McKinsey & Company, about the evolution of this concept over the past three decades. Their discussion, ‘Real Value Props – Direct from the Source,’ is here.

Welcome to: Value Delivery Blog

Updated 3/30/21

This Blog aims to help illuminate what makes for winning and losing business strategies, discussing what we hope are interesting and relevant examples. In this Blog we will especially view these topics through the lens that we––the DPV Group, LLC––call ‘Delivering Profitable Value (DPV),’ which contends that strategy should focus single-mindedly on profitable, superior ‘Value Delivery.’ That is, sustainable business success, while elusive and subject to good and ill fortune, can most realistically be achieved by a concentrated, deliberate effort to creatively discover and then profitably Deliver (i.e., Provide and Communicate) one or more superior Value Propositions to customers.

While many would perhaps say this idea is ‘obvious,’ most businesses do not actually develop or execute their strategies based on this understanding of the central role of the Value Proposition. A Value Proposition should not be a slogan or marketing device for positioning products/services, but rather the driving, central choice of strategy – fundamental to success or failure of the business. In this Blog we will try to bring that idea further to life for a variety of industries and markets.

As background, Michael Lanning first created and articulated/coined some of these concepts, including ‘Value Proposition,’ the ‘Value Delivery System’ and the related concept of ‘Value Delivery.’ He did so (after his first career, in Brand Management for Procter & Gamble) while a consultant for McKinsey & Company, working closely with Partner Ed Michaels in Atlanta in the early 1980s.

In life after McKinsey, he and his colleagues in the DPV Group built on those seminal concepts. Lanning did so first with Lynn Phillips, then a professor at the Stanford and Berkeley business schools, and later with his DPV Group colleagues. These include long-time senior P&G manager Helmut Meixner; Jim Tyrone, a former McKinsey consultant and long-time executive in the paper industry; and others. The expanded concepts of Value Delivery include the Customer’s Resulting Experience and the notion that organizations, rather than being Value-Delivery-Focused, can instead be understood as ‘Internally-Driven’ and/or ‘Customer-Compelled.’

Today, the DPV Group tries to help clients appreciate, internalize, and apply these Value Delivery strategy concepts, to pursue profitable, breakthrough growth in a wide range of B2B and B2C businesses. We hope that this Blog will also contribute to that goal.

Copyright © 2021 DPV Group | Contact: Email us at, or phone us at (770) 390-0114.