|Title||Efficient Adaptive Load Balancing Algorithm for Cloud Computing Under Bursty Workloads|
|Author||Sally Fouad Issawi|
|Supervisor||Dr. Alaa|
|Title in Arabic||انشاء خوارزمية لتوزيع الاحمال للحوسبة السحابية تحت عبء العمل المتقطع (English: Design of a load balancing algorithm for cloud computing under bursty workload)|
|Abstract||Cloud computing is a recently emerging technology in the IT industry and an evolution of earlier computing models such as grid computing. It enables a wide range of users to access a large shared pool of resources over the Internet. In such a complex system, an efficient load balancing scheme is essential to satisfy peak user demand and provide high quality of service. Load balancing distributes workload across multiple nodes over the network links so as to optimize resource utilization, minimize data processing and response times, and avoid overload. One of the challenging problems affecting load balancing is bursty workload. Burstiness occurs when bursts of requests aggregate during short periods of time, creating periods of peak system utilization that can dramatically degrade system performance. Several load balancing algorithms have been proposed that focus on key metrics such as processing time, response time, and processing cost; however, these algorithms neglect the bursty case, and research works that deal with bursty workloads are quite few. Motivated by this gap, this research addresses the load balancing problem in cloud computing under bursty workload by predicting the variation in the request rate and applying the load balancing algorithm suited to the predicted load status. The selected algorithm then assigns each received request to a virtual machine based on information supplied by a fuzzifier. The proposed algorithm has been tested, and the experimental results show that it improves cloud system performance by decreasing the response time and the data center processing time compared with other algorithms. The decrease is about 2 ms when the instruction length is 250 bytes, and the improvement becomes more pronounced as the instruction length is increased to 1000 bytes, with the response time reduced by about 10 ms and the processing time by about 5 ms.|
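The scheme described in the abstract (detect bursts in the request rate, switch dispatch policy accordingly, and pick a VM using fuzzified load labels) can be sketched roughly as follows. This is a minimal illustration under assumed thresholds and class/method names (`AdaptiveBalancer`, `burst_factor`, the "low/medium/high" membership cutoffs), not the thesis's actual algorithm or parameters.

```python
from collections import deque

class AdaptiveBalancer:
    """Illustrative burst-aware dispatcher: it tracks the recent request
    rate, flags bursts, and selects a VM using a coarse fuzzy-style label
    of each VM's load. All names and thresholds are assumptions."""

    def __init__(self, vm_loads, window=10, burst_factor=2.0):
        self.vm_loads = vm_loads            # current load per VM, in [0.0, 1.0]
        self.window = deque(maxlen=window)  # recent request-rate samples
        self.burst_factor = burst_factor    # how far above average counts as a burst

    def observe(self, requests_per_sec):
        # Record one request-rate sample (the "predicted load status" input).
        self.window.append(requests_per_sec)

    def is_burst(self):
        # Treat the latest rate as a burst if it is well above the recent average.
        if len(self.window) < 2:
            return False
        avg = sum(self.window) / len(self.window)
        return self.window[-1] > self.burst_factor * avg

    def fuzzify(self, load):
        # Map a numeric VM load to a linguistic label (assumed cutoffs).
        if load < 0.3:
            return "low"
        if load < 0.7:
            return "medium"
        return "high"

    def pick_vm(self):
        if self.is_burst():
            # Under a burst, send the request to the least-loaded VM.
            return min(range(len(self.vm_loads)), key=lambda i: self.vm_loads[i])
        # Under normal load, rotate over VMs not fuzzified as "high".
        candidates = [i for i, load in enumerate(self.vm_loads)
                      if self.fuzzify(load) != "high"] or list(range(len(self.vm_loads)))
        return candidates[len(self.window) % len(candidates)]
```

For example, with VM loads `[0.2, 0.8, 0.5]` and rate samples `10, 12, 50`, the last sample exceeds twice the window average, so the balancer enters burst mode and routes to the least-loaded VM (index 0).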
|Publisher||Islamic University of Gaza (الجامعة الإسلامية - غزة)|