Load balancing is a network computing method for distributing traffic and workload across multiple resources, such as servers, other network computers, CPUs and even disk drives. The goal of load balancing is to optimize the use of network resources, minimize response time, maximize data throughput and prevent the overloading of any single network resource.
The use of multiple hardware components instead of a single resource can also increase the reliability of a network and its component parts by introducing resource redundancy. Load balancing typically is accomplished with a dedicated hardware or software system, such as a Domain Name System (DNS) process or a multilayer switch.
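One common DNS-based technique is round-robin DNS, in which the name server rotates the order of a hostname's address records on each successive query so that clients spread across the backends. The following is a minimal sketch of that rotation logic, not a real DNS implementation; the function name and the sample addresses are illustrative.

```python
def rotated_answers(records, query_count):
    """Round-robin DNS sketch: rotate the address list so each
    successive query sees a different record first."""
    k = query_count % len(records)
    return records[k:] + records[:k]

# Hypothetical address records for one hostname.
addresses = ["198.51.100.1", "198.51.100.2", "198.51.100.3"]

for q in range(4):
    # Each client typically uses the first record returned.
    print(rotated_answers(addresses, q)[0])
```

Because clients usually connect to the first address returned, rotating the list spreads connections across the three servers without any of them needing to coordinate.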
Among the most common means of load balancing is the use of multiple servers (often referred to as a server farm) to operate a single website or other solitary Internet service. In such server farms, a software program typically listens on the port where external clients connect to the network to access its services and relays each request to one of the backend servers. This prevents clients from connecting directly to any individual server and enables the software to allocate requests across the servers so that no single server or resource becomes overloaded.
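The dispatching step described above can be sketched as a simple round-robin selector: each incoming request is handed to the next backend in turn, cycling back to the first after the last. This is an illustrative sketch only; the class name and backend addresses are invented for the example, and a production balancer would also handle health checks and connection relaying.

```python
import itertools

class RoundRobinBalancer:
    """Hands each incoming request to the next backend in turn,
    so no single server receives a disproportionate share."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        # Return the backend that should handle the next request.
        return next(self._cycle)

# Hypothetical server farm of three backends behind one service.
lb = RoundRobinBalancer(["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"])
print([lb.pick() for _ in range(4)])
```

Round robin is only one policy; real balancers may instead pick the backend with the fewest active connections or weight servers by capacity, but the client-facing behavior is the same: requests arrive at one port and are transparently farmed out.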