A method for increasing the reliability and scalability of web applications using distributed caching
This paper presents a new approach to improving the reliability and scalability of web applications through distributed caching. Traditional centralized caching systems suffer from a single point of failure and limited scalability, which makes them vulnerable to heavy loads, rapid traffic growth, and sudden shifts in request patterns. To address these problems, the proposed method uses a distributed architecture with multiple proxy servers that dynamically balance the load and replicate cache data across the network. Distributing cached content across several servers eliminates the single point of failure and prevents any one server from becoming overloaded, while failover capabilities allow backup servers to maintain continuous service when individual servers fail. The distributed design also simplifies scaling: additional servers can be integrated seamlessly to meet growing user demand without significant infrastructure changes. Experimental results show that the method improves response time, reduces server downtime, and optimizes resource utilization, making it a reliable solution for modern, high-traffic web services.
Keywords: distributed caching, web application scalability, centralized caching, load balancing
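To make the core idea concrete, the following is a minimal Python sketch of hash-based key placement across several cache nodes with one replica for failover, in the spirit of the distributed caching proxy model described above. It is an illustration under simplifying assumptions, not the authors' implementation: the class names (CacheNode, DistributedCache), the node names (proxy-1, proxy-2, proxy-3), and the use of in-memory dictionaries in place of real proxy servers are all hypothetical.

```python
import hashlib

class CacheNode:
    """Stand-in for one caching proxy server (in-memory dict for illustration)."""
    def __init__(self, name):
        self.name = name
        self.store = {}
        self.alive = True

    def get(self, key):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return self.store.get(key)

    def set(self, key, value):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        self.store[key] = value

class DistributedCache:
    """Hash-based key placement with replicas for failover (hypothetical sketch)."""
    def __init__(self, nodes, replicas=1):
        self.nodes = nodes
        self.replicas = replicas

    def _preferred_nodes(self, key):
        # Deterministically order nodes for this key; the first is the primary,
        # the rest serve as replicas / failover targets.
        start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.nodes)
        return [self.nodes[(start + i) % len(self.nodes)]
                for i in range(1 + self.replicas)]

    def set(self, key, value):
        # Write to the primary and its replica(s) so a single server failure
        # does not lose the cached entry.
        for node in self._preferred_nodes(key):
            if node.alive:
                node.set(key, value)

    def get(self, key):
        # Try the primary first, then fall back to replicas.
        for node in self._preferred_nodes(key):
            try:
                value = node.get(key)
                if value is not None:
                    return value
            except ConnectionError:
                continue  # failover: skip the unreachable node
        return None  # cache miss; the caller fetches from the origin server

# Usage: one node fails, but the entry is still served from its replica.
nodes = [CacheNode("proxy-1"), CacheNode("proxy-2"), CacheNode("proxy-3")]
cache = DistributedCache(nodes)
cache.set("/index.html", "<html>...</html>")
cache._preferred_nodes("/index.html")[0].alive = False   # simulate primary failure
assert cache.get("/index.html") == "<html>...</html>"
```

In this sketch, spreading keys by hash plays the role of load balancing across servers, and the replica write plus the fallback read stand in for the failover behavior the abstract attributes to the distributed architecture; a production system would additionally handle node membership changes, cache invalidation, and network communication between proxies.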