
Typically, cooperating applications can be categorized as either a client or a
server. The client application requests services and data from the
server, and the server application responds to those requests. Early two-tier
(client/server) applications were developed to access large databases, and they combined
the rules used to manipulate the data with the user interface in the client
application. The server's task was simply to process as many requests for data storage and retrieval as
possible.
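The request/response exchange described above can be sketched with a minimal TCP client and server in Python. The record key, the in-memory data store, and the single-request server loop are hypothetical simplifications, not part of any particular product:

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 0  # port 0: let the OS pick a free port

def serve(sock: socket.socket) -> None:
    """Server role: answer one client request from a stored data set."""
    records = {"42": "widget"}  # hypothetical data store
    conn, _ = sock.accept()
    with conn:
        key = conn.recv(1024).decode()
        conn.sendall(records.get(key, "not found").encode())

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Client role: request a record and receive the server's response.
with socket.create_connection((HOST, port)) as client:
    client.sendall(b"42")
    reply = client.recv(1024).decode()
print(reply)
```

The client initiates every exchange and the server only reacts, which is the defining asymmetry of the model.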
Two-tier applications perform many of the functions of stand-alone
systems: they present a user interface, gather and process user input, perform the requested
processing, and report the status of the request. This sequence can be repeated as many times as
necessary. Because the server provides only access to the data, the client uses its local resources to perform most of the
processing. The client application must know where the data resides and how it is organized in the
database. Once the data has been retrieved, the client is responsible for formatting and displaying it to the
user.
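The division of labor described above, in which the server stores and retrieves while the client embeds the schema knowledge, issues the query, and formats the result, can be sketched in Python. The parts table and the in-memory SQLite database are assumptions standing in for a shared database server:

```python
import sqlite3

# The client must know where the data lives and how it is organized:
# here a hypothetical "parts" table in an in-memory SQLite database
# stands in for the shared database server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (id INTEGER, name TEXT, qty INTEGER)")
conn.executemany("INSERT INTO parts VALUES (?, ?, ?)",
                 [(1, "bolt", 500), (2, "nut", 300)])

# The server side merely stores and retrieves; the client performs the
# processing: selecting the rows, then formatting them for display.
rows = conn.execute("SELECT name, qty FROM parts ORDER BY qty DESC").fetchall()
report = "\n".join(f"{name:<10}{qty:>5}" for name, qty in rows)
print(report)
```

Note that the SQL text, the column layout, and the display formatting all live in the client, which is exactly why schema changes ripple into every installed copy of a two-tier application.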
One major advantage of the client/server model was that multiple users could simultaneously access the same application
data: an update made from one computer was instantly available to all computers that had access to the
server. However, as the number of clients increased, the server could quickly become overwhelmed with client
requests. Also, because much of the processing logic was tied to a monolithic suite of
applications, changes in business rules led to expensive and time-consuming alterations to source
code. Although the ease and flexibility of two-tier products continue to drive many small-scale business
applications, the need for faster data access and shorter development timelines has persuaded systems developers to seek out new ways of creating distributed
applications.
