The NetPC and Network Computer Explained

This paper looks at the thin client debate through an IS manager's eyes. Most organisations have mission-critical investments in PC hardware and software, and the principal vendors are proposing change: either a new application infrastructure or an ever-tightening dependence on proprietary servers. Other companies have produced compromise products that steer a middle course. bootix, a technology supplier to each side and the middle, examines the possibilities.

The NetPC and the Network Computer are different answers to the same set of problems. Amid the current excitement about terms like network computing, thin clients, fat clients and Java machines, firm definitions are sometimes difficult to find. The Network Computer (NC) is an upstart idea that provoked the creation of the NetPC as a response, and the proponents of each are still waging a vigorous marketing battle. Both proposed devices aim to solve one of the biggest headaches faced by Information Systems managers today:

lowering the cost and raising the quality of computing power delivered at the desktop

The NetPC purists are trying to limit the debate to just this statement. If they can make current technology cheaper to run, they say, the NC will be irrelevant. Diehard NC proponents, on the other hand, want nothing less than a complete change of application platform to support the appealing write-once, run-anywhere concept. While this debate has some merit and must be a consideration for sites building a brand new network from scratch, managers with installed PC networks see the cost of ownership of Windows workstations as the most significant issue.

There are half a dozen main technologies that can be described by either or both of these fashionable terms, and we look here at the benefits each can bring to a large PC-based installation. It is by no means clear which model will prevail - if any - but network strategists need to understand the principles to plan for the future. Right now, when no NetPCs have been made (let alone sold!) and Network Computers are in their infancy, it is still possible to plan ahead and protect investments in traditional PCs and PC applications.
The Distributed PC Networking Model

The years between 1987 and 1995 were marked by the rise and ubiquitous deployment of distributed desktop computing in business environments. Networking became more important through the decade: Local Area Networks, then Wide Area Networks and finally the Internet in the 1990s linked desktop machines with departmental servers, each other, and the world. This model delivered significant business benefits, and user expectations rose along with it. At the same time, hardware price/performance was improving and software capability was increasing. It would seem logical that this would lead to lower corporate IT budgets, or at least lower costs per workstation; however, this was not the case. [1]

In the 1990s client-server architectures became more common, and the desktop PC became a vital component of corporate IT strategies. This gave greater performance to the end user, thanks to dedicated processor power at the server, and management advantages by moving data off the desktops. A bonus was that graphical interfaces usually came with client-server, although they are not a requirement of the technology. In the last four years, the popular acceptance of Internet technologies has led to even more use of client-server, but based on completely public standards. Internet standards for messaging, presentation and mobile authentication have been instrumental in de-emphasising the importance of individual workstations.

The recent promise of Java has been viewed by many as changing this again, as a workstation hosting a Java Virtual Machine is not even necessarily expected to have local storage. Java is not mature, although it is becoming so at a very rapid rate. Most major software companies are putting in big efforts to produce Java applications, but the bulk of mainstream applications are not yet represented in the Java marketplace. Now, in the first half of 1997, the dominant paradigm is still that of servers running a mixture of open and proprietary standards being used by unsecured, autonomous PC workstations.
Costs and Management Problems

There are some well-known problems with this model. Some of them are inherent in the PC hardware design, dating back to the original IBM PC. Others come from the nature of the Microsoft operating system software that most workstations run, which by default exposes every changeable parameter to anyone sitting at the computer. Still more stem from the autonomy of PC systems with respect to servers. There is also the inefficient way that hardware resources are dedicated to a single user and then underutilised most of the time. In one way or another all these issues contribute to the cost of providing IT services, hence the term Total Cost of Ownership (TCO). This covers everything from buying equipment to paying consultants to help secure a PC network. There are companies which specialise in analysing TCO issues, and which claim to be able to help their customers cut costs dramatically [2].

After many studies in the area there is general industry agreement that the cost issues can be grouped into four categories, covering the whole life cycle of computing equipment. These take into account all the factors required to ensure the smooth running of a PC LAN.
Capital costs

These are the costs that have traditionally been taken into account: initial purchase, software (including the operating system) and depreciation. They are also the least significant component, and the one that can most readily be justified.
Technical Support

This covers the support required for day-to-day running of a LAN as the users see it, such as responding to printer queue and file server complaints, helpdesk activities, visiting faulty workstations and producing documentation. This level is commonly called First Tier support; staff here assess problems and pass a proportion of them on to an administrative section.
Administration

General administrative issues exist besides the running of the LAN hardware. Asset management is an important task in a large organisation, keeping track of where the physical items are, what state they are in and who is using them. Often the fact that a traditional PC is modular and easily opened adds another dimension to the problem. Security issues are a major cost as well, with organisations becoming more vulnerable to misappropriated or unintentionally released information. A PC with a floppy drive, backup device and fully accessible hard drive is an open invitation to download and remove intellectual property. Auditing is another key element, particularly in financial, government and military networks which must know at all times what software and hardware is in use.

The technical administrative issues are often the most expensive of all, because in many cases there is no alternative. Keeping a distributed LAN performing involves long term capacity and redundancy planning as well as short term reconfiguration. Networking changes must follow departmental moves. The turnover of technology means that there is always upgrading and new installations to be done. Server administration is a costly operation, particularly in the usual situation of multiple platforms.
End user related

Training is a big cost, covering not only formal courses but also the informal time users spend getting to know various systems. Companies that skimp on formal training pay more informally, and get a less consistent result which indirectly increases costs still further. Then there are the applications that the user sees, which must be either purchased or developed in-house. The user has control over a local hard disk and must manage it to some degree. Not only is this a cost, but it opens the door to curious or well-meaning users doing unhelpful things. PC operating systems are easy to confuse, and most users can achieve it in a few minutes!
A Continuum of Solutions

There are two sets of specifications (NetPC and NC) marking the pure extremes of the debate. Sensing an opportunity, other companies have quickly come forward and offered less ideological interpretations of each standard. There is no single clear-cut choice, because there are now products which fall midway between the two, leaning more to one model or the other.

In addition there are companies with long-established products which are only a small software upgrade away from falling into the same category of solutions. bootix is one of these companies. Just how much each camp has in common is demonstrated by the fact that bootix can contribute technology and expertise to both sides without changing its core emphasis.

The companies with a stake in the Network Computer and NetPC debate claim that their products address the following design criteria:

* Cheaper and Easier to Maintain

This is achieved through centralised management, which allows rapid reconfiguration and diagnosis of workstations. It also permits security profiles to be applied and auditing information to be gathered more easily and reliably. All but one of the proposed solutions (the exception being the pure NetPC) offer the option of dispensing with local persistent storage, which eliminates many common problems.

* Run Windows Applications

This is essential to preserve the existing benefits of the huge installed base of software. Microsoft would prefer to see Intel machines installed that do this natively; others do it by emulation or by remote access to a central application server. All but one of the products being sold or in development (the exception being the NC-1) support Windows applications.

* Run Java Applications and/or Applets

The world's large companies are indicating great support for Java as a preferred application delivery platform for the future. [3]

* Support TCP/IP and Internet Standards

While TCP/IP support has been a given for some years now, seamless integration of Internet standards for workstation functions such as initialisation, authentication and file operations is still not supplied automatically by vendors of PC systems software. Novell and Microsoft are addressing these issues to an extent by extending their own standards, but the market has made it plain that Internet standards are the only ones universally acceptable, and these companies are paying heed.

Just because two things try to solve the same problems doesn't mean they are the same product! Categorising the approaches of the major suppliers helps to define the continuum.
The NC-1, Copycats and Derivatives

A consortium of companies (principally Acorn, Apple, IBM, Netscape, Oracle and Sun) have defined a basic set of features for their vision of a network computer. This is known as the Network Computer Profile 1 (NC-1), and the full specifications are available at http://www.nc.com/. The main items in the minimum configuration are a Java virtual machine and built-in support for TCP/IP and Internet protocols such as email, housed in a compact box of unspecified processor type. There is no local persistent storage, and all management takes place over the Simple Network Management Protocol (SNMP) Internet standard.
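
To make this concrete, here is a minimal sketch (our own illustration, not part of the NC-1 specification) of the kind of Java applet such a device runs. The compiled class lives on a server and is fetched on demand, so nothing needs to be installed or stored on the workstation itself:

    import java.applet.Applet;
    import java.awt.Graphics;

    // Illustrative applet: the class file is downloaded from a web server by the
    // NC's Java virtual machine each time it is needed, so the workstation
    // requires no local persistent storage.
    public class HelloNC extends Applet {
        public void paint(Graphics g) {
            g.drawString("Fetched from the network, running in the local JVM", 20, 20);
        }
    }

A single APPLET tag in a web page on the server is enough to deliver this class to any Java-capable device.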

It is no wonder that Microsoft and Intel at first tried to brand this as little more than a modern-day version of a dumb terminal (besides the fact that it will not run Windows!). That is exactly what it is, with both applications and data designed to reside on the server. While it is a powerful new paradigm and there have already been sales of this sort of machine, most sites cannot consider it because it will not run Windows applications. The lack of a local disk also means that network infrastructure sufficient for a PC LAN will probably need to be upgraded to cope with the extra traffic that NCs generate under anything more than light use.
Other Interpretations of the NC

With the introduction of the NC, however, traditional X terminal manufacturers started adding a Java Virtual Machine to their products to come up with a superset of the NC-1 standard, something which is also capable of running Windows applications.

Then there are the X terminal manufacturers who have gone the other way, and added the ability to display Windows GUI screens exported by an application server, using either proprietary communications such as the Intelligent Console Architecture (ICA) from Citrix or the standard compressed X Window System protocol. Examples are VXL Instruments http://www.vxl.co.uk/ and Wyse Technologies http://www.wyse.com. There is a range of Windows server products which present a Windows GUI over either or both protocols, including Citrix's WinFrame, Insignia Solutions' NTrigue, and Tektronix's WinDD.

Complicating the choices still further [4], there is TriTeal with its SoftNC, which is optimised to accept either X or ICA, or to layer a Windows-style interface onto Java applications. It is itself written in Java, and therefore capable of running on most of the hardware described in this paper! A good overview of this class of solution can be found at http://www.ncworldmag.com/ncworld/ncw-02-1997/ncw-02-softnc.html#sidebar. Competing directly with TriTeal, Insignia has announced that its NTrigue client will soon be available as a Java applet and application, which will give all Java-capable clients (including Microsoft desktop operating systems) access to centrally distributed Windows applications.
Windows Applications Matter for Now

The problem with all of these alternative options for running Windows in an NC environment is that it just isn't native Windows. So much of Windows is based on historical APIs that there are always applications that do not work in a non-native environment, wanting to write to local executables and do other nonsensical things. That is the way Windows is! The most common functions in spreadsheets and word processors generally do work, but there can be no guarantee that all database and productivity applications will, especially those developed in-house. Then there are application clashes on the server, where (in some remote Windows server designs) certain applications will not share with others. No producer of Windows emulation software for any platform - and there are many, dating from the days of Windows 3.11 - will guarantee that their product will run all applications. This even goes for operating systems that run on Intel hardware, where they can take advantage of the 80x86's virtual machine mode. The lesson is that Windows emulation needs to be tested extremely carefully with every application that will be required to run on it. Most sites find this a daunting and unsettling prospect.

For many sites, the final nail in the coffin of the NC-derived strategy for now is that running large numbers of clients against a remote Windows GUI server with a normal cross-section of applications takes simply huge servers by any standard. Gigabytes of main memory and the fastest processors available are required, and serving applications remotely to thousands of PC clients takes hundreds of such machines, a very expensive operation. However, the possibilities of this technology are promising enough for several big-name makers of Intel PC hardware to have committed to producing NC-class machines. One example is Accton; see http://www.nc.com/pr_acct.html.

Many feel that until an organisation can afford to replace its entire PC infrastructure it should not consider anything other than Intel client machines. When the future of application delivery mechanisms has become clearer this may change.
NetPC Reference Platform

After 12 months of scepticism and a lot of negative marketing through 1995 and 1996, Microsoft and Intel performed a quick somersault and on October 28th, 1996 announced the joint NetPC initiative. Although both had professed to be addressing these issues, via Intel's Wired for Management project and Microsoft's Zero Administration Initiative, neither was progressing well until the NC appeared as a competitor. They were quickly joined by Intel-compatible hardware suppliers from around the world, including Compaq, Dell, Digital, Fujitsu, HP, ICL and Siemens-Nixdorf. The message is that this is a new kind of PC for organisations that demand it, and is not meant in any way to supplant the existing and highly successful fully autonomous PC.

Given the group of companies who wrote the specification, it is not surprising that it is principally about hardware. There is no suggestion anywhere of different kinds of applications, and very little mention of drivers. It describes a new kind of modular sealed box with no expansion slots, housing a standard multimedia PC with a hard disk, Pentium-class chip, Plug-and-Play BIOS, network interface and Universal Serial Bus. While many of the hardware irritations of the existing PC design have been removed, it is still recognisable as a traditional PC. It is not a low-end PC either, so none of the cost savings are meant to come from the initial purchase.
What is Different

The firmware specification is somewhat different from that of the traditional PC, but the advances are incremental. The latest version of the standard requires a flash ROM BIOS, with the option of incorporating remoteboot features that have until now been put on a chip in a standard network card. One small firmware item that has not appeared in PC specifications before is a unique ID number for each NetPC. This gives software suppliers an opportunity to restrict IS departments to their choice of licensing models, as well as aiding asset management and theft prevention.

The NetPC specification is geared to Windows 97; however, remoteboot developments planned for Windows NT 5.0 indicate that it will eventually be available for the NetPC architecture as well. There is a common driver model across both operating systems, which removes many of the dependencies. Remote shutdown and wake-up facilities are also standard, and the remoteboot control firmware is designed to be augmented by appropriate initial scanning and installation software. Intel's view of the NetPC can be found at http://www.intel.com/pressroom/archive/releases/nw31297a.HTM, and is very similar to Microsoft's.
MIS Appeal

Built-in network-management agent software is also specified, together with integration with Web-Based Enterprise Management (WBEM; for more on this, see "WBEM and JMAPI on the rise", SunWorld Online, November 1996, at http://www.sun.com/sunworldonline/swol-11-1996/swol-11-connectivity.html). However, this will be a management package specific to Intel and Microsoft, based on WBEM and the Desktop Management Interface (see http://www.dmtf.org).

Overall, the NetPC is a very neat and attractive option for those organisations prepared to invest in Microsoft's preferred configuration management strategies. The hardware will undoubtedly be cheaper to run, and there is no question about application availability because it is still a PC. The controlled startup options and remote shutdown ability give IS management two of the most important benefits of dedicated terminals. The NetPC is tailor-made for organisations who can predict their needs at the desktop for long enough to invest in a PC that is not designed to be upgraded.
A Pragmatic Option: The Partial NetPC

A key statement summarising the view of the NetPC consortium can be found in the Microsoft press release:

The NetPC is a new member of the PC family that will reduce the costs of business computing by optimizing design for a particular class of task-oriented users that do not require the flexibility and expandability of the traditional PC.

In some networks, this very flexibility and expandability is used to business advantage. Some of the largest and most respected financial organisations in the world have a policy of upgrading their machines as much as possible in order to save costs. Simple security measures are widely available to minimise physical tampering, a major goal of the NetPC. The Universal Serial Bus (USB) is already available on standard PCs from brand-name manufacturers, and the improved NetPC BIOS is likely to become standard on all PCs, or at least available as an option. In fact, all the software and firmware benefits can be obtained separately. The NetPC is a neatly packaged solution for organisations that want exactly what it provides, but it is also very rigid. To some organisations, the NetPC specification is largely irrelevant except that it is spurring software and firmware development which will help them control their existing networks of conventional PCs.
Remoteboot

For sites that have solved the physical access problems of PCs, the main NetPC advantages appear to be the proposed improved ROM BIOS and the ability to remotely control the initialisation and boot process. However, this facility is already available on standard PCs with add-on TCP/IP BOOT-PROMs. The NetPC consortium has purchased TCP/IP remoteboot technology from a leading manufacturer. These PROMs take over the boot process of the PC from the moment it is switched on, and give the same functionality as the NetPC. This means that, along with the option of purchasing NetPCs, sites with a large installed base of PCs can consider fitting PROMs to each machine. As an interim or long-term measure, this delivers the key functionality of the NetPC without paying for unwanted features, being restricted in hardware maintenance or losing investments in hardware.
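
To illustrate what such a PROM does once it has been given an address, the sketch below shows the TFTP read request a booting workstation issues after the BOOTP/DHCP exchange has told it which server holds its boot image and what the file is called. This is our own illustration of the general mechanism, not any vendor's firmware; the server address and file name are hypothetical placeholders:

    import java.io.ByteArrayOutputStream;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    // Sketch of the TFTP "read request" (RFC 1350) that follows BOOTP/DHCP in a
    // typical remoteboot sequence. Address and file name are assumed examples.
    public class TftpBootRequest {
        public static void main(String[] args) throws Exception {
            InetAddress bootServer = InetAddress.getByName("192.168.1.10"); // assumed boot server
            String bootFile = "bootimage.sys";                              // assumed boot image name

            ByteArrayOutputStream rrq = new ByteArrayOutputStream();
            rrq.write(0); rrq.write(1);                   // opcode 1 = read request (RRQ)
            rrq.write(bootFile.getBytes("US-ASCII"));     // file name, NUL-terminated
            rrq.write(0);
            rrq.write("octet".getBytes("US-ASCII"));      // binary transfer mode, NUL-terminated
            rrq.write(0);

            byte[] packet = rrq.toByteArray();
            DatagramSocket socket = new DatagramSocket();
            socket.send(new DatagramPacket(packet, packet.length, bootServer, 69)); // TFTP port
            socket.close();
        }
    }

A real boot PROM then receives the image block by block and transfers control to it, in much the same way as a boot sector read from a local disk.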

Remote boot on a standard PC gives three options:

* Remotely-controlled boot somewhat like that proposed for the NetPC
* Remotely controlled installation and/or system repair at startup
* Diskless boot

The NetPC will certainly have a remoteboot option, and may offer remoteboot followed by OS repair and install as required. However, it is specifically not meant to be a diskless workstation.

Intel and Microsoft have derided the NC-1 camp for the lack of virtual memory in its machines, and point to the availability of cheap and fast local storage as an advantage. However, not all PC sites see things that way: for guaranteed and rapid control they are willing to spend more on network infrastructure and remove the local hard drive. Cheap PC hard disks are expensive to maintain on a large scale, and they are a definite security risk. They are also much harder to manage centrally, no matter how good the configuration management software.

Take the case of a company rolling out a new version of a large Windows application to 20 000 desktops. Under the NetPC and standard-PC local storage model, if something goes wrong, backing out of the installation is a major problem. Not only do hard disks take a long time to transfer 50 or 80 megabytes of application code, but thousands of PCs doing this at once will monopolise the network. On the other hand, if the workstations are all diskless and connected to a fast LAN, the rollout and backout happen as soon as the high-performance server hard disks can be updated. This means an order of magnitude less file copying (at least), avoidance of messy installation on each Windows workstation and far better administrative control over the whole process. The downtime saved can equate to a large amount of money.
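
The arithmetic behind that claim is easy to sketch. The figures below simply restate the numbers quoted above; the server count is an assumption chosen for illustration only:

    // Back-of-the-envelope comparison of the two rollout models, using the
    // figures quoted in the text. The server count is an assumed example.
    public class RolloutVolume {
        public static void main(String[] args) {
            long desktops = 20000;       // workstations receiving the new version
            long appSizeMb = 65;         // midpoint of the 50-80 megabytes quoted
            long servers = 50;           // assumed number of boot/application servers

            long localDiskCopyMb = desktops * appSizeMb; // every local hard disk rewritten
            long disklessCopyMb = servers * appSizeMb;   // only the server images change

            System.out.println("Local-disk rollout: " + localDiskCopyMb + " MB copied over the LAN");
            System.out.println("Diskless rollout:   " + disklessCopyMb + " MB written to server disks");
            System.out.println("Reduction factor:   " + (localDiskCopyMb / disklessCopyMb) + "x");
        }
    }

Under these assumptions the diskless model moves several hundred times less data, all of it onto disks that are under direct administrative control.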

Third-party vendors have put a lot of effort into making Windows 95 and Windows 97 boot reliably without a hard disk. Although the area has largely been abandoned by Microsoft (which provides only poor support for the proprietary and inflexible RIPL protocol), there are now some serious remoteboot solutions available. Improvements to Windows NT Workstation version 5 planned by Microsoft to take advantage of the NetPC should also make it possible to boot NT diskless.
LAN Caching

In cases where a hybrid NetPC/normal PC solution is required and the network capacity is not sufficient, LAN caching can be an option. For PCs with or without a hard disk, a cache on the local segment can save large amounts of backbone traffic. The NetPC represents an opportunity for manufacturers of this technology. The market leader, Measurement Techniques, Inc. in the US, has announced support for the NetPC. Their product sheets and press releases can be found at http://www.lancache.com, but to quote part of their NetPC-related announcement:

Shared LAN Cache (SLC) software from Measurement Techniques provides critical performance enhancement technology to companies planning to standardize on the new NetPCs without upgrading their networks and file servers. This caching technology is not provided with Microsoft Windows 95, NT or Novell's new Client 32 products.

In the future, MTI anticipates some PC manufacturers will supply diskless NetPCs with Flash Memory which is many times faster than a hard disk for a local non-volatile cache. MTI also offers an SLC Server which presents a shared cache to a group of NetPCs. MTI plans to license its Shared LAN Cache technology to a wide range of leading PC and enterprise network manufacturers.
Conclusions and the Future

This paper has explored many different answers to the same set of problems. To summarise, the PC network manager has a choice of three main directions:

* Buy Network Computers, and rethink the way that Windows applications are run
* Buy NetPCs, and exchange flexibility for known behaviour when using existing applications
* Keep buying ordinary PCs, and apply NetPC-type firmware and software to suit the organisation

Without a very large budget and top-level commitment to change, managers with mission-critical PC applications are likely to choose one of the latter two options. Where there is the opportunity to start from scratch, the first option may well be a longer-lasting investment. A point worth remembering is that while a Network Computer will never become a PC, there is no reason why at some later date a PC or a NetPC cannot become an NC with an integral Java Virtual Machine. This is the same technology that the Intel-based NC manufacturers are developing now.

The nature of application development is likely to affect what happens next on corporate desktops. It could be that the promise of Java and write-once run-anywhere will be fulfilled, and that software producers will continue to support Java instead of or as well as Windows. If so, then the thin client will be here to stay, joining the PC on the corporate desktops of the world.


References

[1] William Barry <william.f.barry@dartmouth.edu>, Director of Administrative Computing at Dartmouth College, presented a paper at the 1996 CAUSE annual conference. The text is available at http://cause-www.niss.ac.uk/information-resources/ir-library/text/cnc9635.txt. In the introduction he says:

In 1983 Warren McFarlan, in describing how companies face repeated cycles of new information technologies, wrote:

"While the company's use of a specific technology evolves over time, a new wave, that is a new technological advance, is always beginning, so the process is continually repeated....as the costs of a particular technology drop, overall costs rise because of the new waves of innovation."

Given the dramatic improvements in the price/performance ratio of computing hardware, as well as the instances where automation can produce substantial labor savings, it seems counter-intuitive that technological advances could be accompanied by an overall rise in costs. With all of the enthusiasm that often accompanies new IT, and the perennial proclamations of cost reductions and gains in programmer productivity, why is it that the total costs of computing seem to continue to rise?

[2] One prominent example is The Gartner Group (http://www.gartner.com), which has a ten-year history of offering consulting on TCO and has developed well-regarded models. See http://www.gartner.com/consulting/tco.html, and also their range of current research on the topic available for purchase.

"The Cost of Ownership Helpdesk" showcases Interpose's solutions. See http://www.interpose.com/

[3] Client/Server Computing magazine has a feature article called "The Truth About NCs" by Gene Koprowski, March 1997. This can be found at http://www.sentrytech.com/cs037f1.htm.

According to a recent survey of Fortune 1000 companies by Forrester Research, 62 percent already use Java for some development. About 42 percent expect Java to play a strategic role in their company in the coming year.

[4] The NCWorld Magazine at http://www.ncworldmag.com also regularly covers this topic, analysing the latest newcomers to the technology soup.

 

