
Understanding Siege
Siege is a load testing and benchmarking tool that allows us to further analyze our web server's performance. Let's begin by installing Siege inside our Docker container.
From the container's command line, please download and decompress version 4.0.2 of Siege:
# wget -O siege-4.0.2.tar.gz http://download.joedog.org/siege/siege-4.0.2.tar.gz
# tar -xzvf siege-4.0.2.tar.gz
Then, please enter Siege's source code directory to compile and install the software:
# cd siege-4.0.2
# ./configure
# make
# make install
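If the build completed without errors, the siege binary should now be installed under the default prefix (typically /usr/local/bin). As a quick sanity check, you can print the installed version:
# siege -V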
For these tests with Siege, we will be using the -b, -c, and -r switches. Here is what each of these switches does (a small example run follows the list):
- -b, enables benchmark mode, which means that there are no delays between iterations
- -c, sets the number of concurrent users, so that multiple requests are performed at the same time
- -r, determines the number of requests to perform with each concurrent user
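To illustrate how these three switches fit together, here is a much smaller run than the one we will launch shortly: 10 concurrent users, each performing 5 requests, with no delay between iterations:
# siege -b -c 10 -r 5 localhost/index.html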
Of course, you can get more information on Siege's command-line options by invoking the manual from the container's command line:
# man siege
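In addition to the manual page, Siege can print the configuration settings it is currently running with, which can be handy if a test behaves unexpectedly:
# siege -C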
Now launch a Siege benchmark test:
# siege -b -c 3000 -r 100 localhost/index.html
You will then get a benchmark test report like this one:

The Siege benchmark report confirms the results that were obtained from AB
As you can see, the results match those that we got from AB previously. Our test shows a transaction rate of almost 800 transactions per second.
Siege also comes with a handy script named bombardment that can automate tests and help to verify scalability. It allows you to run Siege with an ever-increasing number of concurrent users, and it can take a few optional arguments (the general form is sketched after this list):
- the name of a file containing the URLs to use when performing the tests
- the number of initial concurrent clients
- the number of concurrent clients to add each time Siege is called
- the number of times bombardment should call Siege
- the time delay, in seconds, between each request
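Putting these arguments together, a bombardment invocation has the following general form (the angle-bracketed names are placeholders, not literal values):
# bombardment <urlfile> <initial_clients> <clients_added_per_call> <number_of_calls> <delay_in_seconds>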
We can, therefore, try to confirm the results of our previous tests by issuing the following commands inside the container:
# cd /srv/www
# touch urlfile.txt
# for i in {1..4}; do echo "http://localhost/index.html" >> urlfile.txt ; done
# bombardment urlfile.txt 10 100 4 0
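Incidentally, if you want to confirm what the loop produced, the URL file should simply contain the same URL repeated on four lines:
# cat urlfile.txt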
Once done, you should obtain a report similar to the following one:

The results show that the longest transaction is much higher when there are 210 or more concurrent users
Try again, but this time request the PHP file:
# echo "http://localhost/index.php" > urlfile.txt # for i in {1..3}; do echo "http://localhost/index.php" >> urlfile.txt ; done # bombardment urlfile.txt 10 100 4 0
This test should provide results similar to these:

Serving dynamic content scales in a similar way to serving static content, but at a much lower transaction rate
The second terminal window running top now shows about 50% usage of both available processors and almost 50% RAM usage on my computer:

The container’s usage of CPU and memory resources when it is subjected to benchmarking tests
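If you do not have a terminal window running top at hand, Docker itself can report the container's CPU and memory consumption from the host's command line (the container name linuxforphp is only an assumed example here; substitute the actual name of your container):
$ docker stats linuxforphp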
We now know that, when there are relatively few concurrent requests, this hardware allows for good performance on a small scale: roughly 800 transactions per second for static files and about 200 transactions per second for pages with dynamically generated content.
Now that we have a better idea of our web server's baseline performance, based solely on our hardware's resources, we can start to truly measure the speed and efficiency of the server's dynamically generated content through profiling. We will now proceed to install and configure tools that will allow us to profile and optimize PHP code.