
The ThinkPad 700C, a notebook PC with a TFT color liquid crystal display and the first model to use the ThinkPad name

  I don’t know about the company’s view on this, but what I wanted to achieve was an amalgamation of the various technologies we had. By that point, the Yamato lab had already built up various technology groups, including one with expertise in LCDs (liquid crystal displays). Nowadays, color TFT displays are commonplace, but at the time, a product featuring a color LCD with active matrix technology was an object of wonder. The Yamato lab possessed all the technologies needed to make such a product possible. Compact hard disks (HDs) were also made at the Yamato lab. And the TrackPoint, developed by a fellow at IBM’s Almaden lab in the U.S., was incorporated as a new pointing device.

  I insisted that we should have swappable IBM-made hard disks and a 10.4-inch color TFT display on an A4-sized PC, and that we should integrate all these technologies. Making a notebook PC required shrinking the motherboard as well; luckily, IBM’s Yasu plant, which was the center for board design and production, had this technology. LCD production was also done at the Yasu plant.

  Looking back, I feel that the groundwork had already been laid for developing a compact portable PC in Japan, and in particular at the Yamato lab. The fact that Japan had many excellent battery manufacturers also worked in our favor.

  To tell the truth, I had originally envisioned the ThinkPad’s design as being white. Development work had already progressed quite a bit with a white case when Richard Sapper, who worked as an IBM design consultant, and the corporate ID team came to me and announced, “The color will be black.” A simple change in color would have been fine, but I was also asked to change the design to its current angular form, inspired by the traditional square Shokado lunchbox, which had been proposed by Kazuhiko Yamazaki of the Yamato lab. Though we belonged to the same organization, the design team was independent and under a different division, and I had not been well informed in advance about that decision.

  “What, at this stage?” I asked, in shock.

  This was how I honestly felt at the time. However, I had no choice but to comply with the decision, which meant that changes in the design became a necessity. This was how the current design of the ThinkPad, clad in matte black with a red TrackPoint as an accent, came to be.

  Indeed, the ThinkPad name itself was not decided until just prior to the sales launch. As is commonly known, the origin of the name was the motto “Think!”, which had been introduced as an IBM slogan by chairman and CEO Thomas J. Watson, Sr. IBM employees used to walk around with notepads with “Think” written on their covers, from which the ThinkPad name was born. The person who decided on the ThinkPad name was Bruce Claflin, then General Manager of the PC division.

  The logo mark was also selected later. It is now different, but the original design, which featured the words IBM and ThinkPad engraved at an angle, was symbolic. The logo’s slanted orientation was unique to the ThinkPad among IBM’s products. The IBM logo features three colors—the red, green and blue of the RGB color space—to commemorate the color TFT liquid crystal display. For versions featuring a monochromatic LCD, it was decided that only the color blue would be used for the logo.

  The problem-tracking system and the headaches of the development manager

   

  There were many hardships, but I personally tend to forget such things. Instead, I recall being happy when, each time we proposed a new ThinkPad, we would get positive feedback from users. I was honestly moved each time. Of course, many hurdles had to be cleared each time before we could reach that point. The organization itself being young and the product category new, we lacked any pre-existing standards, and so had to set new specifications and design accordingly each time. Problems are bound to occur in such situations, and I sometimes felt as if I was drowning under the number of difficulties we were faced with in the beginning.

  Though different manufacturers probably have different names for it, we had what we called a problem-tracking system for managing these challenges. Someone may find a problem and jot it down as a memo in a notebook, but such a memo is likely to get lost somewhere and thus fail to receive timely scrutiny. So when a problem occurs, it is important to systematically assign it a number and release a report describing what happened. The person on the development team responsible for receiving these reports then determines who should be assigned to look into each one. Making that call well is a crucial skill that can make all the difference to a development project.

  Say a problem is thought to be related to the BIOS (Basic Input/Output System), the firmware that controls peripheral devices. Once the issue is determined to be BIOS-related, ownership of the problem, as it’s called, passes to the BIOS manager. When the BIOS manager reports, “The problem indeed had to do with the BIOS, and it has been fixed,” the problem is put in the “Verify” category, indicating that the fix is now ready to be verified. The case file then goes back to the person who originally discovered the problem. That person checks the file, and if the problem is confirmed to have been solved, the case is closed. In other words, when a problem is discovered, it is systematically tracked through the Open→Working→Verify→Close steps to completion.

  However, the nature of a problem cannot always be identified right away. For example, after studying the problem, the BIOS manager may report back, “This is not a BIOS problem. I think the problem may lie in the electronic circuits.” Such cases are labeled “transfer,” as ownership of the problem then switches from the BIOS manager to the person in charge of electronic circuits, who re-examines it. Such problems can occur by the dozen every day. Before a product is completed, thousands of them will be found, every single one of which must be solved. This is the job of engineers.
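  To make this flow concrete, here is a minimal sketch of such a tracker in Python. The state names (Open, Working, Verify, Close), the idea of transferring ownership, and the rule that the original discoverer confirms the fix all come from the account above; the class, its method names, and the sample data are illustrative assumptions, not a description of IBM’s actual system.

```python
from dataclasses import dataclass
from enum import Enum


class State(Enum):
    """Lifecycle described above: Open -> Working -> Verify -> Close."""
    OPEN = "Open"
    WORKING = "Working"
    VERIFY = "Verify"
    CLOSED = "Close"


@dataclass
class Problem:
    number: int        # every reported problem is assigned a tracking number
    description: str
    reporter: str      # the person who discovered the problem
    owner: str         # current owner, e.g. the BIOS manager
    state: State = State.OPEN

    def start_work(self) -> None:
        """The assigned owner begins investigating the problem."""
        self.state = State.WORKING

    def transfer(self, new_owner: str) -> None:
        """Ownership moves, e.g. from the BIOS manager to the circuits team."""
        self.owner = new_owner
        self.state = State.WORKING

    def mark_fixed(self) -> None:
        """The owner reports a fix; the case is now ready to be verified."""
        self.state = State.VERIFY

    def close(self, verifier: str) -> None:
        """Only the original reporter confirms the fix and closes the case."""
        if verifier != self.reporter:
            raise ValueError("only the person who found the problem may close it")
        self.state = State.CLOSED


# Example (hypothetical data): a problem first assigned to the BIOS manager,
# then transferred to the circuits team, fixed, and verified by its reporter.
p = Problem(number=1042, description="hangs on resume",
            reporter="tester_a", owner="bios_manager")
p.start_work()
p.transfer("circuits_manager")   # "This is not a BIOS problem."
p.mark_fixed()
p.close("tester_a")              # the discoverer confirms the fix; case closed
```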

  Naturally, development work must follow a schedule, as product development cannot be allowed to drag on forever. To meet the schedule, it is therefore necessary to specify when the test period must end. The person who discovered a problem can accordingly set a deadline for fixing it, and from this a schedule is drawn up with deadlines for verification and for closing the case. Such schedules are always over-optimistic, so even though people have been told endlessly about the importance of staying on schedule, the pressure mounts daily, with questions such as “Why are dozens of problems still cropping up at this stage?” and “Why is this still only in the Working stage?”

  Based on personal experience, I believe that the most important skill of a development manager is the ability to put together, in his mind, how things will play out and to judge whether the project is heading for disaster or still doing well. For example, and this happens often, when you ask the people in charge of some task how they are doing, the worst possible answer to get is, “We’re working on it.” I can tell just by looking that they’re doing that. What I want to hear is how many problems they have now, how many more they expect further down the line given the current rate, and whether they think they’ll be fine by a given date. It goes without saying that “We’re thinking about it” falls equally short as an answer.
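  As a rough illustration of the kind of answer a manager can act on, the sketch below extrapolates the recent trend in open problem counts to judge whether the backlog will plausibly reach zero by a target date. The linear extrapolation, the function name, and the sample figures are all hypothetical; only the underlying idea of projecting from the current rate comes from the account above.

```python
from datetime import date, timedelta


def projected_zero_date(samples: list[tuple[date, int]]) -> date | None:
    """Estimate when the number of open problems reaches zero.

    samples: (date, open problem count) observations, oldest first.
    Returns the projected date, or None if the backlog is not shrinking.
    """
    (d0, c0), (d1, c1) = samples[0], samples[-1]
    days = (d1 - d0).days
    if days == 0 or c1 >= c0:
        return None                  # no trend yet, or backlog still growing
    rate = (c0 - c1) / days          # problems closed per day, on average
    return d1 + timedelta(days=round(c1 / rate))


# Hypothetical figures: 120 open problems two weeks ago, 60 today.
samples = [(date(1993, 3, 1), 120), (date(1993, 3, 15), 60)]
target = date(1993, 4, 10)
eta = projected_zero_date(samples)
print("on track" if eta is not None and eta <= target else "at risk")
```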

  If someone tells me, “I’d like to have us figure this out in, say, three days,” I’ll ask, “When you say ‘figure this out,’ what exactly do you mean?” To which he might reply, “Whether it’s an electrical problem or a software problem.” This is the type of step-by-step approach I use when confronting problems. There is no need for empty statements like, “Well, maybe it’ll be okay.”

  The failure of the development manager to have a solid grasp of the situation may ultimately rip a hole in the entire business. To avoid this, the development manager must keep steady pressure on the engineers, repeating the process until it becomes routine. I suppose this might well be called a form of hardship.

  Why the emphasis on robustness?

   

  I am very confident in the sturdiness of the ThinkPad. The path to the ThinkPad’s rugged design was marked by various dramatic episodes. In those days, a color notebook PC carried a price tag of about one million yen; it was a very expensive piece of equipment. Whether it was destroyed during testing or broken by a customer, it represented a loss of one million yen. Our engineers thought at first that such an expensive article should be handled delicately, and so no thought was actually given to making notebook PCs tough. But once sales started, we discovered that some people would manage to break them. The products being so expensive, breaking one brought serious consequences for the person responsible. People would say “It broke,” rather than “I broke it.” Such is human psychology. This made our engineers change their minds: making the product rugged would be better for customers as well as for us, since pricey parts breaking frequently during the warranty period could cause us large losses.

  As notebook PC prices declined with their increasing popularity, this trend became ever more pronounced. Everyone started carrying notebook PCs, greatly raising the probability of accidents through dropping, bumping, or the application of excessive loads. As the name suggests, portable PCs were developed so that they could be carried around on a daily basis, but there were still usage patterns (styles of carrying) that far exceeded our expectations.

  A second important point is that the ThinkPad was originally a product intended for corporations, that is, for business use. The basic business model is to ship several tens of thousands of units to a single company. Purchase orders as large as 100,000 units from major corporations are not unusual, meaning that even if the probability of an accident is low, some kind of accident or problem can be expected almost daily even within a single company. The quality expectations of a consumer who purchases one unit and of a corporation that purchases 100,000 differ considerably, all the more so because those 100,000 units are used by 100,000 different people, meaning a wide variety of usage styles and failure modes.

  Thus, following the launch of the ThinkPad, all kinds of negative data accumulated in our database. While each case had a different priority level, determined by its urgency and importance, it was our responsibility to open all these cases and see them to a close. During that process, we learned that unless we understood the behavior of customers (users) and saw things from their viewpoint, observing and learning in their environment to understand why specific problems occurred, we would not be able to solve them thoroughly. And so, as we took care of business, we gained valuable knowledge.

  Even members of the development team need to learn from and explain things to users as they interact with them. So, although we were confident, we were also strongly aware that we were being educated by our customers. I am loath to say this, but problems that don’t come up even when we monitor a hundred machines arranged side by side for a year sometimes readily appear on customers’ PCs. In the early stage, collecting user feedback was as efficient in practice as running final tests on 100,000 machines all at once. Of course, during development we build machines capable of withstanding all kinds of assumed conditions, but even so, we had no choice but to set the criteria ourselves, as we had no precedents to go by. The resulting products are often used by end users in completely unexpected ways, often more aggressively than anticipated. Sometimes this comes as a refreshing surprise, and all such discoveries provide valuable lessons, full of potential for driving evolution and growth.

  If a given part breaks when subjected to some form of use, we would determine how much strength that part must have, and request that specific degree of durability from the parts manufacturer. Over time, that level goes on to become an industry standard. This is the way things often play out.

  If you think of a given breakage as an accident, it is negative data, but if you consider it the discovery of a new usage method, it becomes positive input for driving evolution. The ways in which individual customers use their PCs are more innovative than we could imagine, because they make full use of their machines as business tools and come up with many new ideas in seeking to maximize their usefulness. For instance, we found that motherboards would often break at a certain university in the U.S., but we were utterly at a loss as to why. So we sent over four or five engineers from Japan and had them live with the students for a week or so, during which time they were able to observe how the students used their PCs.

  What they saw was that the students would throw their PCs into backpacks already crammed with hardcover books and other belongings, and ride their bicycles around the campus and elsewhere. These days, this kind of scene can also be seen in Japan, but at the time, we could not have imagined it. Under such conditions, the machines are subjected to extremely high stress. Also, when a student gets off his bike to retie his shoelaces, he bends over. In that position, the items in his backpack get strongly compressed, further increasing the stress applied to the computer and sometimes causing the solder on the motherboard to come loose. This was not something we had anticipated. A few years later, such issues began arising at companies too, but as we had already begun to revise our quality standards, we were able to respond quickly. This is one example of how we implemented the Open→Working→Verify→Close process. In this way, a specific complaint and a single discovery may lead to the revision of both basic design standards and evaluation standards.

  Developing test equipment to simulate the stresses and strains applied under such usage conditions was another important task for us. Before making products, we would make the tools for evaluating them as well. Knowing what kinds of tools to create and what kinds of test equipment to provide is itself extremely valuable know-how. So let me explain a bit about our current quality tests. Notebook PCs are machines that come into their own when they are carried about. No matter how high the performance of the CPU or how long the battery life, these features are meaningless unless the machine is durable enough to be carried around casually and quiet enough to be used anywhere.

  To this end, we run a number of quality tests, many of which you could call torture tests. Robustness, for example, is checked through several drop tests: the free-fall drop test, in which the display is opened while the OS is running and the ThinkPad unit is dropped from a height greater than that of a desk; the corner drop test, in which the unit is dropped from each of its eight corners in turn, so that the impact of each drop is concentrated on the opposite corner, after which the cover and internal components are inspected for damage; and the tilt drop test, in which an operating ThinkPad is held from one side, lifted, and dropped from a height of a few centimeters to check for possible damage to the HDD. There is also a liquid spill test, in which a certain amount of liquid (milk, soda, and so on) is spilled onto the keyboard of a powered-on ThinkPad to check that the liquid drains away from the precision circuits. Other tests include the dust resistance test, in which the unit is operated in a dusty environment and its operation and condition are checked, and the operating noise measurement test, which measures noise emissions, such as the sound of the cooling fan and of HDD access, in a special acoustic chamber.

Top photo: Special acoustic chamber where operating noise measurement tests are conducted to confirm quietness