What is “Client/Server Computing”?
By J. W. Rider
J. W. Rider Consulting
“Client/Server Computing” has been defined in many ways. For our purposes, client/server computing is a software engineering technique, often used within distributed computing, that allows two independent processes to exchange information through a dedicated connection, following an established protocol.
For reasons unknown, people will look at this definition and then read things into it about client/server that are simply not warranted. Let’s consider some of these interpretations and clarify them.
Client/Server computing is NOT having one computer set up as a server and another computer set up as a client. This is a case of over-specialization, confusing the terminology with something that is merely an example. Client/server (C/S) is far broader. To be sure, C/S is often involved when a server machine communicates with a client machine. However, two separate machines are not a requirement. Client and server software components can also communicate with each other when they are installed on the same computer.
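A minimal sketch makes the point concrete: here both the “server” role and the “client” role run on the same machine, talking over a loopback socket. The echo protocol and the use of a background thread are illustrative assumptions, not part of any particular system.

```python
import socket
import threading

HOST = "127.0.0.1"  # loopback: client and server share one machine

# The "server" role: listen and wait until a client connects.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind((HOST, 0))   # port 0: let the OS choose a free port
listener.listen(1)
port = listener.getsockname()[1]

def serve_one():
    conn, _addr = listener.accept()        # blocks until a client connects
    with conn:
        request = conn.recv(1024)          # a "client-to-server" message
        conn.sendall(b"echo: " + request)  # a "server-to-client" reply

server = threading.Thread(target=serve_one)
server.start()

# The "client" role: initiate the connection and send the first message.
with socket.create_connection((HOST, port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

server.join()
listener.close()
print(reply.decode())  # prints "echo: hello"
```

The same code would work unchanged if the two roles lived on different machines; only the host address would differ, which is precisely why two computers are not a requirement.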
In addition, trying to figure out which computer is a client and which is a server can be a problem. Some computers may be set up as clients to some servers and, simultaneously, as servers to some clients. This approach is common in the middle tier of a Three-Tier Distributed Architecture. The computers in the middle tier are often called “application servers,” a term that obscures their part-client nature. Client/server could have been called “humpty/dumpty,” which would possibly have eliminated a lot of confusion about the technique involved. However, for historical reasons, this form of distributed computing has become known as “client/server.” Conventional usage has the “server/provider/responder” as a piece of software that “listens” and “waits” until a message is received from a “client/customer/requestor” that spurs the server into action.
Client/Server computing is NOT any old distributed computing. This is a case of over-generalization, when a term is confused with a broader subject. In fact, client/server may be used in non-distributed computing, but most applications of client/server are indeed distributed. At the application layer in distributed computing, the “business-aware” components transfer business-related information between processes that may or may not reside on the same computer. In a well-designed distributed computing system, the information seems to be transmitted directly. However, the application layer components don’t really send the information directly. Instead, they rely upon lower-level components to package the information appropriately and to transmit it across the local area network (LAN). The lower levels provide the “communication transparency” that makes it seem as though the business-aware components are in direct contact with one another. The application layer components are not concerned with how the lower levels perform the task, as long as it is done transparently. Distributed computing could be accomplished without client/server (but the modern practice is indeed to use C/S).
Client/Server computing is NOT concerned directly with communications over the network. That is, C/S is not interested in the intricacies of how one machine communicates with another, or with how the bits are pushed through the coax. Most client/server software components are set up to communicate with a peer process (having little, if anything, to do with “peer-to-peer” architecture) over a virtual, direct channel. The very same network components may be used by both client and server software components.
What client/server computing does concern itself with is the nature of communications over that virtual, direct channel that connects a single “client” (or “humpty”) process with a single “server” (or “dumpty”) process. These two software processes may reside on different machines, or they could be installed on the same machine. This communication is carried by the transport layer of the ISO Open Systems Interconnection (OSI) model.
When a client/server connection is made, information may flow both ways through the virtual channel that connects the two software processes. The terms “client” and “server” apply to roles that the software components assume during a single connection session. One of the software processes adopts the role of “server” and waits for connections to be made. The other process adopts the role of “client” and makes the connection with the “server.” How the physical connection is made goes beyond the scope of the client/server connection. Once the connection has been established, either software component may pass messages to the other. However, this must be accomplished in accordance with the interface protocol. The client may send only those messages defined for the client-to-server direction; the server sends only “server-to-client” messages.
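The idea of an interface protocol that constrains each role can be sketched in a few lines. The message names here are invented for illustration; a real protocol would define its own vocabulary for each direction.

```python
# Messages each role is permitted to send, per the (hypothetical) protocol.
CLIENT_TO_SERVER = {"GET_QUOTE", "PLACE_ORDER"}
SERVER_TO_CLIENT = {"QUOTE", "ORDER_ACK", "ERROR"}

def check_message(sender, message):
    """Reject any message not defined for the sender's role."""
    allowed = CLIENT_TO_SERVER if sender == "client" else SERVER_TO_CLIENT
    if message not in allowed:
        raise ValueError(f"{sender} may not send {message}")
    return message

check_message("client", "GET_QUOTE")  # allowed: client-to-server message
check_message("server", "QUOTE")      # allowed: server-to-client message
# check_message("server", "PLACE_ORDER")  # would raise ValueError
```

Either side may send at any time once connected; what the protocol fixes is *which* messages belong to which direction.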
In contrast with the other techniques available in distributed computing, client/server is unique in specifying two separate roles for exchanging information. In the file transfer approach (not to be confused with FTP, the specific file transfer protocol, which is itself client/server-oriented), information flows one way through the channel, and the session is terminated. In the peer-to-peer approach, both components are designed to submit and respond to the same messages.
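The distinction among the three approaches can be illustrated with toy message traces; the message names are invented. The test asks a simple question: is every message type sent by more than one party? Peer-to-peer says yes, while client/server ties each message type to a single role.

```python
# Toy traces for the three approaches (message names are illustrative).
# File transfer: one-way flow, then the session ends.
file_transfer = [("sender", "DATA"), ("sender", "DATA"), ("sender", "EOF")]

# Peer-to-peer: both sides submit and respond to the same messages.
peer_to_peer = [("a", "PING"), ("b", "PING"), ("a", "PONG"), ("b", "PONG")]

# Client/server: distinct message sets per role.
client_server = [("client", "REQUEST"), ("server", "RESPONSE")]

def roles_symmetric(trace):
    """True when every message type is sent by more than one party."""
    senders = {}
    for party, msg in trace:
        senders.setdefault(msg, set()).add(party)
    return all(len(parties) > 1 for parties in senders.values())

print(roles_symmetric(peer_to_peer))   # True: same messages, both directions
print(roles_symmetric(client_server))  # False: messages tied to one role
```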
Of course, as you start to see how the client/server approach differs from the other two approaches, you may also recognize that both file transfer and peer-to-peer communication could be implemented in client/server terms; that is, they could be treated as special cases of the client/server approach. This is true, but it’s important to recognize that the other approaches could be implemented differently from client/server as well. It is that flexibility of the client/server architecture for incorporating the other architectures that has led the client/server approach to become preeminent among distributed computing implementations.