Microsoft’s Windows NT Server continues to evolve as an enterprise-level operating system, but it still trails Unix. NT isn’t likely to fill all your network operating system needs until some time in the next century.
Enterprises are deploying NT at an ever-increasing rate, but only in its traditional supporting role as a file, print and applications server. Unix is still superior in scalability, reliability and management capabilities.
At the enterprise level, we do not see NT at all right now. The stability and scalability of Unix effectively keep NT out of that space. But Microsoft keeps working relentlessly at the enterprise level. Despite some well-publicized implementation gaffes and security breaches, NT’s advance toward becoming an enterprise platform has proceeded steadily over the past year.
In a survey of users, file and print service was the primary reason companies were acquiring NT, followed by messaging and Internet access. In contrast, database hosting was the primary reason users acquired Unix systems, followed by file and print, custom applications and Internet access.
Although NT is enjoying enormous sales growth, the increase is coming from market expansion and displacement of other network operating systems – not at the expense of Unix.
The biggest technology gap between NT and Unix is on the scalability front. In a recent survey, 62% of IT managers at large organizations viewed NT as not scalable. Although NT theoretically supports up to 32 processors in a symmetric multiprocessing (SMP) system, early versions of NT could not scale effectively beyond two processors.
There were no eight-processor systems, and even scaling up to four was iffy. Now scaling up to four processors is quite practical.
Experts are not impressed with NT’s eight-processor performance, but it should be noted that Intel Corp. has yet to ship eight-processor boards on which to test NT. Analysts assembled eight-processor NT machines by combining two four-processor boards.
While NT’s added scalability is a start, it still pales in comparison to what Unix can do. IBM’s AIX can run across a massively parallel 512-node system, and each node can be an SMP computer. In a single box, Solaris scales up to 64 processors and provides better linear scalability than NT at even the four and eight-processor levels.
At four processors, NT is scaling by a factor of about 1.6, so each additional processor is only adding about 60% of its stand-alone processing power. In contrast, Solaris scales by a factor of 1.8 to 1.9, or 80% to 90%.
This lets companies increase the power of a server as demands on that server grow. It is a lot cheaper to add processors and disk capacity to a single server than to put new servers in.
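The scaling arithmetic above is easy to work through. The figures quoted are per-doubling factors: each time you double the processor count, throughput multiplies by roughly 1.6 for NT and 1.8 to 1.9 for Solaris. A brief sketch (the function name and exact Solaris factor of 1.85 are illustrative, not from any vendor benchmark):

```python
import math

def effective_throughput(processors, scaling_factor):
    """Relative throughput of an SMP box, assuming each doubling of
    processors multiplies throughput by `scaling_factor` (between 1 and 2)."""
    return scaling_factor ** math.log2(processors)

# NT at four processors, with a per-doubling factor of about 1.6:
nt = effective_throughput(4, 1.6)        # 1.6 * 1.6 = 2.56x one processor
# Solaris at four processors, with a factor of roughly 1.8 to 1.9:
solaris = effective_throughput(4, 1.85)  # about 3.4x one processor

print(f"NT: {nt:.2f}x  Solaris: {solaris:.2f}x")
```

So even at four processors the gap is already material: NT extracts about 2.6 processors’ worth of work from four chips, Solaris about 3.4.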
Despite these limitations, NT has made some impressive gains in scalability.
“In standard testing, we’ve gone from 2,100 transactions per minute and 1,800 concurrent users in October of 1995 to 16,000-plus transactions per minute and more than 14,000 concurrent users today,” says Ed Muth, group product manager of NT enterprise products at Microsoft.
However, such numbers measure one product’s performance relative to another’s, not real-world capability. In practice, very few NT servers are handling even 400 concurrent users. In contrast, some high-end Unix systems are supporting 30,000 concurrent users.
IBM set new Internet world records with its AIX- and RS/6000-based web site for the Winter Olympics in Nagano, Japan. The site handled 650 million hits during the 16-day event and reached a peak rate of 103,429 hits per minute.
The performance gap between Unix and NT may actually be widening, since there is almost no scalability limit on Unix.
NT’s 32-bit architecture poses another weakness. A 64-bit operating system can address memory in 64-bit chunks, which means more data can be kept in memory and disk access is reduced.
This very large memory (VLM) addressing boosts the performance of databases and data warehouses and enables them to scale much larger. The 64-bit architecture also increases I/O bandwidth so data can be transferred much faster.
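The gulf between the two address sizes is worth quantifying. A quick back-of-the-envelope calculation (illustrative only; real operating systems reserve part of the address space for the kernel):

```python
# Maximum directly addressable memory for 32-bit vs. 64-bit pointers.
addr_32 = 2 ** 32   # 32-bit address space
addr_64 = 2 ** 64   # 64-bit address space

print(addr_32 // 2 ** 30, "GB addressable with 32 bits")   # 4 GB
print(addr_64 // 2 ** 60, "EB addressable with 64 bits")   # 16 exabytes
```

A 32-bit system tops out at 4 GB of addressable memory, which is why large databases on such systems are forced back to disk; a 64-bit system’s theoretical limit is 16 exabytes, four billion times larger.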
Unix started evolving into a 64-bit operating system several years ago as developers added VLM extensions for database applications. Other 64-bit components followed, and SunSoft is adding the final piece: 64-bit virtual memory addressing.
Microsoft promised to deliver a complete 64-bit version of NT when Intel’s first Merced chipsets ship in 1999. However, this is not what is preventing deployment of enterprise databases and data warehouses on NT.
A full 64-bit operating system is actually more important on the workstation side for intensive applications.
In any case, Intel and its OEMs appear to be hedging their bets. Instead of relying entirely on Microsoft’s NT efforts, they have asked SunSoft to have a 64-bit version of Solaris ready for Merced when it ships. Intel has also promised to help The Santa Cruz Operation release a 64-bit version of UnixWare 7.
Reliability & Realities
Scalability issues become academic if a system cannot be trusted to stay up. Some Unix systems have been continuously available for years, even through maintenance and upgrades. You can even change the IP address on a Solaris server without bringing it down, and SunSoft has promised live operating-system upgrades in the near future.
In the NT environment, systems must be rebooted whenever changes are made to the Windows Registry or when memory leaks threaten to crash a server. NT programs rarely run for weeks without locking up. They are improving, but NT is still not like Unix environments, where programs almost never crash. NT hardware also tends to be less reliable than Unix platforms.
Microsoft’s one-size-fits-all approach has produced a general-purpose operating system that has grown from 16 million lines of code in Version 4.0 to 30 million in Version 5.0.
In their hurry to create an operating system that competes with Unix, NT developers have implemented their design inefficiently. The current version of the more mature Solaris is a relatively lean 10 million lines of code, which makes it easier to maintain. In terms of reliability, less complexity is like having fewer moving parts.
There is no facility in NT that tracks misbehaving applications and prevents memory leaks. Consequently, these applications may steal more and more memory until the system crashes. Unix can spot faulty programs before they crash and continue to provide uninterrupted service to other applications.
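The kind of safeguard NT lacks can be sketched in a few lines. This is an illustrative watchdog, not an actual NT or Unix facility: it recycles any application whose memory use grows past a cap, so one leaky program cannot drag down the rest of the server. All names and the 512 MB threshold are hypothetical.

```python
MEMORY_CAP_MB = 512  # hypothetical per-application memory budget

class App:
    def __init__(self, name):
        self.name = name
        self.memory_mb = 64   # baseline footprint after startup
        self.restarts = 0

    def leak(self, mb):
        self.memory_mb += mb  # simulated memory leak

    def restart(self):
        self.memory_mb = 64   # back to baseline
        self.restarts += 1

def watchdog(apps):
    """Recycle only the offending application; others run uninterrupted."""
    for app in apps:
        if app.memory_mb > MEMORY_CAP_MB:
            app.restart()

apps = [App("billing"), App("web")]
apps[0].leak(600)             # billing leaks well past the cap
watchdog(apps)                # billing is recycled; web is untouched
```

The point of the sketch is isolation: the faulty program is caught and restarted before it exhausts system memory, while well-behaved applications never notice.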
Sabre, a travel services company based in Texas, has two big travel applications: one on NT and one on Unix. “For speed and scalability, Unix remains the choice,” says Terrell Jones, CIO of Sabre Group Holdings. “NT probably has a higher operational cost because of its hardware requirements and complexity, but we’re happy with NT security.”
Safety In Numbers
Microsoft last year addressed reliability by adding its Cluster Server software. Two NT servers can be linked to provide redundancy in case of a failure. However, the software does not provide a single-system image, and recovery takes 30 to 60 seconds.
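The failover model described above amounts to a heartbeat check between the two linked servers. A hedged sketch follows; the node names, timings and promotion logic are illustrative, not Cluster Server’s actual protocol.

```python
HEARTBEAT_TIMEOUT = 30.0  # seconds; recovery was quoted at 30 to 60 seconds

class Node:
    def __init__(self, name):
        self.name = name
        self.active = False
        self.last_heartbeat = 0.0

def failover_check(primary, standby, now):
    """Promote the standby if the primary's heartbeat has gone stale."""
    if now - primary.last_heartbeat > HEARTBEAT_TIMEOUT:
        primary.active = False
        standby.active = True
    return standby.active

primary = Node("nt-a")
primary.active = True
primary.last_heartbeat = 100.0
standby = Node("nt-b")

failover_check(primary, standby, 110.0)  # heartbeat 10s old: no change
failover_check(primary, standby, 140.0)  # 40s stale: standby takes over
```

Because the two machines do not share a single-system image, clients see the 30-to-60-second gap while the standby takes over, which is exactly the window the article describes.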
For everything below the enterprise database level, availability is not an issue with NT if you select the right products and design and implement an appropriate fault-tolerant architecture.
Memory leaks were a problem at first, but most have been resolved by working with developers on fixes that make the applications behave properly. At one financial market, for example, NT servers are rebooted after the market closes each day to free up consumed memory.
There is more to availability than clustering, however; misbehaving applications cause most availability problems. Equally important are management facilities that provide advanced diagnostics, a global directory, remote administration and online serviceability. This is where NT is far behind, and a lot still needs to be done.
Getting Past Insecurities
While opinions differ, the gap between NT and Unix does not seem to be so wide when it comes to security. In fact, some think NT has an inherent advantage because security attributes were built into it from the ground up. But for the time being, Unix has a jump on security simply because of its proven track record.
Unix started off as a relatively insecure environment back in the ARPANet days; security features were added gradually as the US government adopted the operating system and forced the issue. Kerberos was developed on Unix, and Unix vendors can now offer several levels of security.
However, users insist NT is quite secure, too, despite the recent denial-of-service attack on NT-based Web servers. Most of the victims were not protecting their NT servers with firewalls, and all failed to install the fix Microsoft had published prior to the attacks. Not surprisingly, most security breaches are caused by faulty security administration.
Operating System For The Masses
One NT feature that gets touted is ease of use. While Unix is programmer-driven, Microsoft has always focused on making things easier for users. NT’s network configuration is completely GUI-based and the majority of IS professionals are more comfortable in Windows than in Unix.
However, NT’s entry point is so low that it is causing problems with some big NT rollouts. NT’s greatest strength is also its greatest weakness: the first rung of the ladder is so close to the ground that anyone can get on it.
Many administrators have a desktop or workgroup orientation and lack the enterprise skills they need to deploy NT on a large scale. Some try anyway and make a mess, and the failures get chalked up to NT scalability problems. People who don’t have an enterprise perspective are pushing NT too far in terms of scalability and reliability.
Ultimately, NT’s success against Unix depends on application support. Microsoft has always been great at rallying independent software vendors around the Windows platform.
Enterprise-level software developers, such as Oracle Corp., are moving aggressively into the NT space. So are smaller players.
Third-party software developers are much more comfortable in the NT environment today.
IBM gave NT a big vote of confidence when the systems giant announced the porting of IBM’s TXSeries transaction-processing middleware to NT. It is being bundled into a high-end software suite, code-named Bartoldi. These are the crown jewels, and IBM is bringing them to NT.
The same economic weapons that won Microsoft the desktop are rapidly securing for NT the crucial middle tier, where most business-logic programming takes place. Microsoft’s success here is forcing even the most committed Unix shops to deploy some NT servers.
A Matter Of Time
To move into the top tier of the enterprise application arena, Microsoft needs to scale its business model up alongside NT. The company is geared toward selling millions of cheap desktop units anonymously through a distribution channel. In the Unix market, a single enterprise system might cost $5 million and would involve an ongoing service and support relationship with the customer.
For now, “it is frankly not our goal to compete with the top 1% to 2% of scalable systems”, says Mark Hassel, NT server product manager at Microsoft.
Meanwhile, count on the Unix community to keep raising the technological bar. Much of what ails NT is simply its age. Unix has been maturing for decades while 5-year-old NT is barely out of the toddler stage.