Simulative Performance Evaluation for the Design of Distributed Systems
Dissertation submitted to the Wirtschaftswissenschaftliche Fakultät (Faculty of Economics) of the Universität Zürich for the degree of Doctor of Informatics
submitted by
Peter Lukas Weibel of Gelterkinden BL and Schongau LU
approved at the request of
Prof. Dr. Lutz H. Richter
Dr. Reinhard Riedl
December 2004
The Wirtschaftswissenschaftliche Fakultät of the Universität Zürich, Lehrbereich Informatik (Department of Informatics), hereby permits the printing of this dissertation without thereby taking any position on the views expressed therein.

Zurich, 8 December 2004*

The head of the Lehrbereich: Prof. Dr. Martin Glinz
* Date of the doctoral graduation ceremony
Abstract
Performance evaluations have mostly been measurements that determine the processing speed of a system or a component. In the case of distributed systems, performance is often only tested once the system runs in a test environment or even in the productive environment. Only then are real usage scenarios, real amounts of data, and real effects of workload and disturbances present, and only then are measurements realistic. Many modern approaches allow the realization of all kinds of design conceptions for distributed systems, but only a few of them seriously consider the performance aspect. In this thesis we present an approach that allows statements about the usefulness and the consequences of design conceptions for a system from the performance perspective even before the system has been realized or changed. The intention is a complement to systems design, not an examination after the completion of a system's realization.

The core of our approach is an evaluation process that is closely integrated with the design process for a distributed system. The design model created there is translated into an evaluation model to be examined. The aim is to allow statements about resource usage, response time, and other performance indicators in order to find out whether the chosen system architecture can satisfy the requirements. Different usage scenarios can be used for this purpose.

Once an evaluation model has been created, evaluation strategies are applied to gain knowledge about its performance. We present different strategies in this dissertation. The so-called Cold Start Protocol, for example, is a simple strategy that efficiently determines a throughput maximum for simple cases. More complex strategies have to be applied if the system usage is complex; they typically rely on the simpler strategies for their own realization. The strategies are the core of our research. We use them to test hypotheses and to perform learning processes. They allow an evaluation system to execute standard tasks of performance evaluation without necessarily being controlled by an expert. A tool implementing these strategies is a means for designers to examine their design decisions by executing an evaluation, and even to compare alternatives directly. Even simple examinations of scalability are possible with this approach.

The strategies are realized by varying specific parameters of the evaluation models. The variations refer to user-determined model parameters. A strategy determines individual configurations, for each of which a simulation experiment is executed. From the resulting simulation series, the strategy is able to determine the effects of the variation. Finally, the results are presented in a suitable way, mostly as a graphic representation that in most cases contains the results of multiple experiments. It is meant to facilitate interpretation and to help users draw the right conclusions from the evaluation.
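To make the parameter-variation idea concrete, the following is a minimal, purely illustrative sketch of such a strategy loop: one simulation experiment per configuration of a user-chosen model parameter, with the resulting performance indicator collected for comparison. All names (EvaluationModel, run_experiment, sweep_arrival_rate) and the stubbed response-time estimate are assumptions made for this sketch, not the interface or models of the thesis' evaluation tool.

```python
# Hypothetical sketch of a parameter-variation strategy: the strategy picks
# configurations of one user-determined model parameter, runs a simulation
# experiment per configuration, and collects the performance indicator.
from dataclasses import dataclass
import random


@dataclass
class EvaluationModel:
    """Stand-in for an evaluation model derived from a design model."""
    service_time_ms: float   # fixed model parameter
    arrival_rate: float      # parameter the strategy will vary (requests/s)


def run_experiment(model: EvaluationModel, seed: int = 0) -> float:
    """Stub for a single simulation experiment; returns mean response time (ms).

    A real evaluation would execute a discrete-event simulation here; this
    stub uses a crude single-queue estimate plus noise, purely for illustration.
    """
    rng = random.Random(seed)
    utilisation = min(model.arrival_rate * model.service_time_ms / 1000.0, 0.99)
    return model.service_time_ms / (1.0 - utilisation) * rng.uniform(0.95, 1.05)


def sweep_arrival_rate(base: EvaluationModel, rates: list[float]) -> dict[float, float]:
    """Simple strategy: one experiment per configuration of the varied parameter."""
    results = {}
    for rate in rates:
        configured = EvaluationModel(base.service_time_ms, rate)
        results[rate] = run_experiment(configured)
    return results


if __name__ == "__main__":
    base_model = EvaluationModel(service_time_ms=20.0, arrival_rate=0.0)
    series = sweep_arrival_rate(base_model, rates=[5, 10, 20, 30, 40])
    for rate, resp in series.items():
        print(f"arrival rate {rate:>4}/s -> mean response time {resp:6.1f} ms")
```

The collected series corresponds to the results of multiple experiments mentioned above, which the approach would then present graphically to support interpretation.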
Zusammenfassung
Performance evaluations are normally measurements with which the processing speed of a system or a component is verified. The performance of distributed systems is frequently only examined once the systems…