Benchmarking WO Performance
Recently, I wrote an article for general consumption about scaling
WebObjects to support more and more users in gradual stages. As
I was writing that article, I realized that neither I nor anyone
else really had any idea of what WO performance was like on various
platforms. Since WO performance is a combination of a number of
factors, I think it's important to come up with a set of benchmarks
for evaluating WO performance.
I would like to see a set of benchmarks that isolates each stage in the pipeline. These benchmarks could then be used to calculate the effect of an improvement made to any specific section of the pipeline.
So here is my proposal:
Benchmark #1: Raw HTML Speed
The purpose of this benchmark is to isolate the raw speed of the web server itself. This speed can then be used as an upper limit: presumably, a WO application can never do better than this value. The benchmark is simple: download the index.html page and associated images from the sample index.html provided by the Apache web server. The number of pages per minute should be reported as the benchmark value. The test should free-run for 10 minutes to allow the web server to cache anything it is going to cache; then the average pages per minute over the next 10 minutes should be reported.
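The warm-up-then-measure protocol used by all of these benchmarks can be captured in a small harness. The following is only a sketch, in Python, of how such a harness might work; the `fetch` callable and the Apache index.html URL are placeholders, not part of the proposal itself:

```python
import time

def pages_per_minute(fetch, warmup_s=600, measure_s=600):
    """Call fetch() repeatedly: first a warm-up phase whose results
    are discarded (so caches can settle), then a timed phase whose
    average rate in pages per minute is reported."""
    deadline = time.monotonic() + warmup_s
    while time.monotonic() < deadline:    # warm-up: results discarded
        fetch()
    count = 0
    start = time.monotonic()
    deadline = start + measure_s
    while time.monotonic() < deadline:    # measured phase
        fetch()
        count += 1
    elapsed = time.monotonic() - start
    return count * 60.0 / elapsed         # pages per minute

# Hypothetical usage for Benchmark #1 (URL is a placeholder):
#   import urllib.request
#   rate = pages_per_minute(
#       lambda: urllib.request.urlopen("http://server/index.html").read())
```

The same harness would serve Benchmarks #2 through #5 by swapping in a different `fetch` and, for the longer tests, different warm-up and measurement windows.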
Benchmark #2: Raw WO Speed
The purpose of this benchmark is to isolate the raw speed of WebObjects without any database transactions or complex calculations. This value should represent the upper limit of speed obtainable by a WO application on the given platform. This test should be performed in WebScript, Java, and Objective-C on the platform being tested. The test is very simple: using the "Hello World" demo application as the WebObjects application, download the result page as many times as possible and report the number of pages per minute. The test should free-run for 10 minutes to allow WO to cache anything it is going to cache, then the average pages per minute over the next 10 minutes should be reported.
Benchmark #3: Raw WO & EOF Speed
The purpose of this benchmark is to isolate the raw speed of WebObjects when performing EOF transactions. Since the database in this test is small and easily cached, this benchmark should report the upper limit of speed obtainable by a WO application performing database queries. That is, the purpose of this test is to isolate the WO->EOF loop. This test should be performed in WebScript, Java, and Objective-C on the platform being tested, and repeated once for each database available on that platform. EOF caching should be fully enabled.
Specifically, the test consists of pulling the MovieDetails page from the Getting Started with WebObjects tutorial. Like the other tests, this one should free-run for 10 minutes, then the average pages per minute over the second 10 minutes should be reported.
Benchmark #4: WO->EOF->Database speed
The purpose of this benchmark is to isolate the speed of the WebObjects to EOF to database loop. This benchmark is the same as Benchmark #3, except that EOF caching is NOT enabled, so that EOF must go to the database for each query. However, any caching in the database itself should be enabled, since the purpose of this test is to measure the pipeline from WebObjects to EOF to the database, not the raw database speed.
Benchmark #5: WO->EOF->Database write speed
The purpose of this benchmark is to measure the speed of modifications to the database. For this benchmark, the Movies sample application is used as in Benchmarks #3 and #4, but in this case, new movie titles are randomly generated and added to the database. Any and all caching may be enabled. Like the other tests, this one should free-run for 10 minutes, then the average pages per minute over the second 10 minutes should be reported.
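Generating the random movie titles is straightforward. A sketch in Python (the word lists are invented for illustration; the proposal doesn't specify a title format):

```python
import random

# Hypothetical word lists; any vocabulary would do.
ADJECTIVES = ["Crimson", "Silent", "Broken", "Hidden", "Final"]
NOUNS = ["Empire", "Harvest", "Witness", "Voyage", "Reckoning"]

def random_movie_title(rng=random):
    """Build a random title for insertion into the Movies database;
    the numeric suffix keeps repeated inserts distinct."""
    return "%s %s %04d" % (rng.choice(ADJECTIVES),
                           rng.choice(NOUNS),
                           rng.randrange(10000))
```

Each measured page request would then submit one such title through the application's insert page, so that the benchmark exercises the full write path rather than replaying a single cached insert.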
Benchmark #6: "Typical" browse-only speed
This is the most complicated benchmark, and the one most generally useful for analysis. In this case, Apple's nile.com sample application serves as the example WebObjects application. Randomized user sessions of 5-10 page requests are generated and replayed against the application, with the average pages per minute being reported. These sessions should only "read" from the site; no orders should actually be submitted. Any and all caching should be enabled for this benchmark, and the application should free-run for 1 hour before measurement begins, with the average pages per minute over the next 30 minutes being reported. The purpose of this test is to generate a "typical" performance number for a given configuration. This number could then be used as a conservative estimate of performance for a "read-only" WebObjects application.
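The randomized sessions described above could be generated with something like the following Python sketch. The page names are placeholders, since the real page inventory of the nile.com application isn't public; the only properties the sketch preserves are the 5-10 request length and the browse-only restriction:

```python
import random

# Hypothetical browse-only page names for the nile.com application.
BROWSE_PAGES = ["Main", "Browse", "Search", "BookDetails", "Reviews"]

def random_session(rng=random):
    """Return a randomized user session of 5-10 page requests,
    starting at the entry page as a real visitor would."""
    length = rng.randint(5, 10)  # inclusive on both ends
    return ["Main"] + [rng.choice(BROWSE_PAGES) for _ in range(length - 1)]
```

A session list like this would then be handed to the load driver, which requests each page in order within a single WO session so that session-management overhead is included in the measurement.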
Benchmark #7: "Typical" commerce speed
This benchmark is exactly the same as the previous benchmark but in this case, each randomized user session ends with a purchase transaction. The purpose of this test is to generate a conservative estimate of performance for a web-commerce WO application.
Benchmarks #6 and #7 assume that Apple will make available the nile.com web application they used for the Ziff-Davis benchmark. The main reason I chose the nile.com application is that, presumably, Apple has spent some time optimizing it, and also, presumably, the database it uses is large enough that caching by EOF and the database would be useful but wouldn't dominate the benchmark results.
Also, some standardized way of pounding on the WO application is needed...
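Lacking a standard tool, here is a minimal sketch of what such a load driver might look like, using Python threads to stand in for concurrent browsers. The `request` callable is a placeholder for whatever actually issues an HTTP request against the WO application:

```python
import threading

def pound(request, workers=20, requests_per_worker=100):
    """Drive `request` from many threads at once, simulating
    concurrent browsers; returns the total number of requests issued."""
    counts = [0] * workers  # one slot per worker, no lock needed

    def worker(i):
        for _ in range(requests_per_worker):
            request()
            counts[i] += 1

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts)
```

Whatever tool the community settles on, the important thing is that it be the same tool, with the same concurrency settings, for every platform being compared; otherwise the driver itself becomes an uncontrolled variable in the benchmark.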
The purpose of this article was not to create a full-blown benchmark from whole cloth, but rather to start a discussion of how to build one. If the WO community creates a benchmark suite, that will give hardware and database vendors a target to aim for and a standard to measure themselves against.
Pierce T. Wetter III, May 2nd, 1999