Wednesday, August 3, 2016

PDQ as a Performance Periscope

This is a guest post by performance modeling enthusiast, Mohit Chawla, who brought the following very interesting example to my attention. In contrast to many of the examples in my Perl::PDQ book, these performance data come from a production web server, not a test rig. —NJG

Performance analysts usually build performance models based on their understanding of the software application's behavior. This post, however, describes how a PDQ model acted like a periscope, and proved pedagogical as well, by uncovering otherwise hidden details about the application's inner workings and performance characteristics.

Some Background

A systems engineer needed some help analyzing the current performance of a web application server, as well as its capacity consumption. Time-series data and plots provided a qualitative impression for this purpose, mostly as a sanity check on the data. Observing and interpreting these time series helped build an empirical understanding of the application's traffic patterns and resource utilization, but it wasn't sufficient to make an accurate judgement about the expected performance of the web server if the application configuration or system resources were changed, i.e., real capacity planning.

In fact, what was needed was some kind of "periscope" to see above the "surface waves" of the time-series performance data. In addition, a more quantitative framework would be useful to go beyond the initial qualitative review of the time-series. All of this was needed pretty damn quick! ... Enter PDQ.
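
As a concrete illustration of what "Enter PDQ" means in practice, here is a minimal sketch of an open queueing model written against PDQ's Python bindings, whose function names mirror the PDQ C API. The arrival rate, service time, and single-node topology below are placeholder assumptions for the sketch, not the production measurements from this engagement; if your PDQ build exposes slightly different binding names, adjust accordingly.

  # A minimal open (Poisson-arrival) PDQ model of a web application server.
  # All numbers are illustrative placeholders, not data from this post.
  import pdq

  arrival_rate = 75.0    # HTTP requests per second (assumed)
  service_time = 0.010   # seconds of service demand per request (assumed)

  pdq.Init("Web App Server Periscope")
  pdq.CreateOpen("HTTP_Req", arrival_rate)         # open workload stream
  pdq.CreateNode("AppServer", pdq.CEN, pdq.FCFS)   # single queueing center
  pdq.SetDemand("AppServer", "HTTP_Req", service_time)
  pdq.Solve(pdq.CANON)                             # canonical open-network solution
  pdq.Report()                                     # utilization, queue length, residence time

Re-solving the same model at projected arrival rates is what turns a time-series snapshot into the forward-looking capacity view that the time series alone could not provide.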

Thursday, July 28, 2016

Erlang Redux Resolved! (This time for real)

As I show in my Perl::PDQ book, the residence time at an M/M/1 queue is trivial to derive and (unlike most queueing theory texts) does not require any probability theory arguments. Great for Guerrillas! However, by simply adding another server (i.e., M/M/2), that same Guerrilla approach falls apart. This situation has always bothered me profoundly and on several occasions I thought I saw how to get to the exact formula—the Erlang C formula—Guerrilla style. But, on later review, I always found something wrong.
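
For readers unfamiliar with that Guerrilla-style argument, its flavor (paraphrased here, not quoted from the book) goes like this: a request arriving at an M/M/1 queue finds, on average, $Q$ requests already there and therefore spends $R = S + Q\,S$ in the system. Substituting Little's law, $Q = \lambda R$, and writing $\rho = \lambda S$ gives \begin{equation*} R \, = \, S + \lambda R \, S \quad \Longrightarrow \quad R \, = \, \frac{S}{1 - \rho} \end{equation*} with no explicit probability distributions in sight. It is exactly this kind of shortcut that refuses to generalize gracefully to M/M/2 and beyond.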

Although I've certainly had correct pieces of the puzzle at various times, I could never get everything to fit in a completely consistent way. No matter how creative I got, I always found a fly in the ointment. The best I had been able to come up with was what I call the "morphing model" approximation, in which the system starts out as $m$ parallel queues at low loads and morphs into a single M/M/1 queue, $m$ times faster, at high loads.

That model is also exact for $m = 2$ servers—which is some kind of progress, but not much. Consequently, despite a few misplaced enthusiastic announcements in the past, I've never been able to publish the fully corrected morphing model.

Previous misfires have included:

  • Falsely claimed in this 2008 blog post. There, I show a Table of how complicated the Erlang B and C functions are when expressed as rational functions of $\rho$ (the per-server utilization) for $m = 1, 2, \ldots, 6$ service facilities.
  • As a side note, it's precisely those impenetrable polynomials with unfathomable coefficients that completely put me off the approach I am about to describe here.
  • In the Comments section of that same post, Boris Solovyov asked in 2013: "Did you ever write this out in full?" I sheepishly had to report that I hadn't had time to pursue it (which is mostly true). I hope he reads this post.
  • A more intuitive explanation of the motivation for the morphing model was given in this 2011 blog post, but no advance on the long sought-after correction terms.
  • Falsely claimed again in this 2012 blog post. There's a photo of one of my whiteboards, deliberately kept inscrutably small—a good choice, as it turned out.

I think I know how Kepler must've felt. The difference between an M/M/1 queue and an M/M/m queue is like the difference between a circle and an ellipse. Merely changing the width slightly makes the calculation of the perimeter enormously complicated; in fact, the perimeter of an ellipse has no simple closed-form solution! Now, however, I believe I finally have the correct approach for sure. Really!... Yep... Uh huh... No, seriously. Well, see for yourself.

The Starting Point

Start with the morphing approximation for the M/M/m waiting time in my Perl::PDQ book—Chap. 2 in the original 2004 edition or Chap. 4 in the 2nd 2011 edition. \begin{equation} \Phi_W^{approx} (m, \rho) \, = \, \frac{1}{\rho^{-m} \, \sum_{k=0}^{m-1} \rho^k} \label{eqn:morphfun} \end{equation} This definition has a denominator involving a truncated geometric series in $\rho$. It is used to derive the Guerrilla approximation for the residence time at an M/M/m queue with mean service time $S$ \begin{equation*} R^{approx} \, = \, \frac{S}{1 - \rho^m} \end{equation*} which is exact for $m = 1, 2$. The exact general formula for the residence time is given by \begin{equation*} R^{exact} \, = \, S \; + \; \frac{C(m, \rho)}{m( 1 - \rho)} \, S \end{equation*} where $C(m, \rho)$ is the Erlang C function defined in (\ref{sec:realEC}).
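
As a quick numeric sketch of the approximation (with placeholder values of $S$ and $\rho$, chosen only for illustration):

  # Guerrilla (morphing) approximation to the M/M/m residence time:
  #   R_approx = S / (1 - rho^m)
  # The values of S and rho below are placeholders, not measurements.

  def r_approx(m, rho, S=1.0):
      """Morphing approximation to the mean residence time at an M/M/m queue."""
      return S / (1.0 - rho**m)

  S, rho = 0.5, 0.75   # assumed service time (sec) and per-server utilization
  for m in (1, 2, 4, 8):
      print(f"m={m}: R_approx = {r_approx(m, rho, S):.4f} sec")

  # For m = 1 this reduces to the familiar M/M/1 result, R = S / (1 - rho).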

Five Easy Pieces

The following 5 steps provide the corrections to the approximation in (\ref{eqn:morphfun}) and result in the exact Erlang C function without resorting to the usual probability arguments found in almost all queueing theory textbooks. Along the way, and quite unexpectedly (for me), I derive the Erlang B function as an intermediate result. Originally, I thought I would have to introduce that function as a kind of axiomatic probability. I wasn't thinking of it as a major waypoint. As far as I'm aware, this derivation has never been presented before because you have to be sufficiently perverse to have thought up the morphing approximation in the first place.
  1. Replace $\rho$ in (\ref{eqn:morphfun}) by the traffic intensity $a = m \rho$ \begin{equation} \Phi_W^{approx} (m, a) \, = \, \frac{1}{a^{-m} \, \sum_{k=0}^{m-1} ~a^k} \label{eqn:morpha} \end{equation}
  2. Convert $\Phi_W^{approx}$ to a truncated exponential series by applying $a^n \mapsto a^n / n!$ \begin{equation} \frac{1}{m! a^{-m} \, \sum_{k=0}^{m-1}~\frac{a^k}{k!}} \end{equation}
  3. Extend the summation over all $m$ servers to yield the famous Erlang B function for an M/M/m/m queue \begin{equation} B(m, a) \, = \, \frac{\frac{a^m}{m!}}{\sum_{k=0}^{m}~\frac{a^k}{k!}} \label{eqn:EB} \end{equation} Historically, $B(m, a)$ has been associated with the probability that an incoming telephone call is blocked and lost from the system, e.g., gets an engaged signal. I can't remember the last time I heard an engaged signal. I think they've been replaced by voice-mail and Muzak.
  4. Using the more compact notation $A_m = a^m / m!$, and writing $\Sigma_k$ for the reduced sum over the terms $k = 0, 1, \ldots, m - 1$, rewrite (\ref{eqn:EB}) as \begin{equation} B(m, a) \, = \, \frac{A_m}{A_m + \Sigma_k} \end{equation}
  5. Scale $A_m$ by $(1 - \rho)^{-1}$ to introduce the infinite possible wait states and arrive at \begin{equation} \Phi_W^{exact} (m, a) \, = \, \frac{1}{1 \, + \, (1 - \rho) \, m! \, a^{-m} \, \sum_{k=0}^{m-1}~\frac{a^k}{k!}} \label{eqn:myEC} \end{equation} which is the fully corrected version of (\ref{eqn:morphfun}).      Q.E.D.
Furthermore, it can be shown that (\ref{eqn:myEC}) is identical to the famous Erlang C function \begin{equation} C(m, a) \, = \, \frac{ \frac{a^m}{m!} \, \big(\frac{m}{m-a}\big) }{1 \, + \, \frac{a}{1!} \, + \, \frac{a^2}{2!} \, + \, \cdots \, + \, \frac{a^{m-1}}{(m-1)!} \, + \, \frac{a^m}{m!} \, \big(\frac{m}{m-a}\big) } \label{sec:realEC} \end{equation} Historically, $C(m, a)$ has been associated with the probability that an incoming telephone call must wait to get a connection (aka "call waiting"), rather than being dropped, as it is in B(m, a). However, since $C(m, a)$ determines the mean waiting time in any M/M/m queue, it applies to any multi-server system, e.g., modern multi-threaded applications.
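
As a sanity check on the five steps, the following sketch computes the Erlang B function (\ref{eqn:EB}), the textbook Erlang C function (\ref{sec:realEC}), and the corrected morphing function (\ref{eqn:myEC}), then confirms numerically that the last two agree. The test values of $m$ and $\rho$ are arbitrary; nothing here depends on them.

  # Numerical check that eqn (myEC) reproduces the Erlang C function (realEC).
  from math import factorial, isclose

  def erlang_b(m, a):
      """Erlang B, eqn (EB): probability an arrival is blocked in M/M/m/m."""
      return (a**m / factorial(m)) / sum(a**k / factorial(k) for k in range(m + 1))

  def erlang_c(m, a):
      """Erlang C in its textbook form, eqn (realEC)."""
      top = (a**m / factorial(m)) * (m / (m - a))
      return top / (sum(a**k / factorial(k) for k in range(m)) + top)

  def phi_w_exact(m, a):
      """The corrected morphing function, eqn (myEC)."""
      rho = a / m
      return 1.0 / (1.0 + (1.0 - rho) * factorial(m) * a**(-m) *
                    sum(a**k / factorial(k) for k in range(m)))

  for m in (1, 2, 3, 8):
      for rho in (0.25, 0.5, 0.9):
          a = m * rho
          assert isclose(phi_w_exact(m, a), erlang_c(m, a), rel_tol=1e-12)
          # Standard relation between B and C (a known identity, not part of
          # the derivation above): C = B / (1 - rho*(1 - B)).
          B = erlang_b(m, a)
          assert isclose(erlang_c(m, a), B / (1 - rho * (1 - B)), rel_tol=1e-12)
  print("eqn (myEC) matches Erlang C at all test points")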

Equation (\ref{sec:realEC}) was first published by A. K. Erlang in 1917 (he of the Copenhagen Telephone Company), along with (\ref{eqn:EB}). Thus, next year will be its centennial. Nice timing on my part (although that was never the plan—there never being any plan).

It's worth emphasizing that the morphing approximation (\ref{eqn:morphfun}) accounts for about 90% of what is going on with $R^{exact}$ in the M/M/m queue. The remaining 10% contains the minutiae regarding how the waiting line actually forms. But, as you can see from the above transformations, it's a rather subtle 10%.
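
To put a rough number on that remaining subtlety, here is a short comparison of $R^{approx}$ and $R^{exact}$ at a few test points (the service time and utilizations are again just placeholders; the Erlang C function is the same textbook form used in the previous sketch):

  # Relative error of the morphing approximation R = S/(1 - rho^m)
  # against the exact M/M/m residence time. Test values are illustrative.
  from math import factorial

  def erlang_c(m, a):
      top = (a**m / factorial(m)) * (m / (m - a))
      return top / (sum(a**k / factorial(k) for k in range(m)) + top)

  S = 1.0
  for m in (2, 4, 16):
      for rho in (0.5, 0.8, 0.95):
          a = m * rho
          r_exact = S + erlang_c(m, a) * S / (m * (1 - rho))
          r_approx = S / (1 - rho**m)
          err = 100.0 * (r_approx - r_exact) / r_exact
          print(f"m={m:2d} rho={rho:.2f}: R_exact={r_exact:7.3f} "
                f"R_approx={r_approx:7.3f} error={err:+6.2f}%")

The error vanishes identically for $m = 2$ and stays small elsewhere, which is the quantitative sense in which the morphing model already captures most of the story.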

Next Steps

This is just a sketch proof. I've suppressed a lot of details because there are many, and I have 100 pages of typeset notes to back that up. I know a lot about what doesn't work. (Sigh!) Now that I have the correct mathematical logic sketched out, I'm quite confident that it can also be supplemented with a more visual representation of how the corrections to the morphing function (\ref{eqn:morphfun}) arise.

Wednesday, June 8, 2016

2016 Guerrilla Training Schedule

After a six month hiatus working on a major consulting gig, Guerrilla training classes are back in business with the three classic courses: Guerrilla Bootcamp (GBOOT), Guerrilla Capacity Planning (GCAP) and Guerrilla Data Analysis Techniques (GDAT).

See what graduates are saying about these courses.

Some course highlights:

  • There are only 3 performance metrics you need to know
  • How to quantify scalability with the Universal Scalability Law
  • Hadoop performance and capacity management
  • Virtualization Spectrum from hyper-threads to cloud services
  • How to detect bad data
  • Statistical forecasting techniques
  • Machine learning algorithms applied to performance data

Register online. Early-bird discounts run through the end of July.

As usual, Sheraton Four Points has bedrooms available at the Performance Dynamics discounted rate.

Tell a friend and see you in September!

Saturday, May 14, 2016

PDQ 7.0 Dev is Underway

The primary goal for this release is to make PDQ acceptable for uploading to CRAN. This is a non-trivial exercise because some legacy C code in the PDQ library needs to be reorganized while, at the same time, keeping it consistent for programmatic porting to languages other than R, chiefly Perl (for the book) and Python.

To get there, the following steps have been identified:

  1. High Priority

    1. Migrate from SourceForge to GitHub.
    2. Change the return type for these functions from int to void:
      • PDQ_CreateOpen()
      • PDQ_CreateClosed()
      • PDQ_CreateNode()
      • PDQ_CreateMultiNode()
      Using the returned int as a counter was deprecated in version 6.1.1.
    3. Convert the PDQ-R interface to Rcpp.
    4. Clean out the Examples directory and other contributed-code directories, leaving only examples that actually use the PDQ C library.
    5. Add unit tests for the PDQ C library, as well as for the Perl, Python, and R interfaces.
    6. Get the R interface accepted on CRAN.
    7. Add the ability to solve multi-server queueing nodes servicing an arbitrary number of workloads.

  2. Low Priority

    1. Get the Perl and Python interfaces accepted on CPAN and PyPI.
    2. Convert the build system from makefiles to CMake.

Stay tuned!

—njg and pjp

Friday, May 13, 2016

How to Emulate Web Traffic Using Standard Load Testing Tools

The following abstract has been submitted to CMG 2016:

How to Emulate Web Traffic Using Standard Load Testing Tools

James Brady (State of Nevada) and Neil Gunther (Performance Dynamics)

Conventional load-testing tools are based on a fifty-year-old time-share computer paradigm in which a finite number of users submit requests and respond in a synchronized fashion. Modern web traffic, by contrast, is essentially asynchronous and driven by an unknown number of users. This difference presents a conundrum for testing the performance of modern web applications. Even when the difference is recognized, performance engineers often introduce virtual-user script modifications based on hearsay, much of which leads to wrong results. We present a coherent methodology for emulating web traffic that can be applied to existing test tools.

Keywords: load testing, workload simulation, web applications, software performance engineering, performance modeling

Related blog posts:

  1. Emulating Web Traffic in Load Tests
  2. Mapping Virtual Users to Real Users
  3. How to Extend Load Tests with PDQ
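
To make the closed-versus-open distinction in the abstract concrete, here is a small sketch (not the methodology from the paper) that contrasts a closed load-test model, solved with exact mean-value analysis for one queueing center plus a think-time delay, against an open M/M/1 model driven at the same throughput. Every parameter value is assumed purely for illustration.

  # Closed load-test model (N synchronized virtual users with think time Z)
  # versus an open model (Poisson arrivals) at the same request rate.
  # All values are assumed for illustration.

  S = 0.05   # mean service time per request, seconds (assumed)
  Z = 2.0    # virtual-user think time, seconds (assumed)
  N = 30     # number of virtual users (assumed)

  # Exact MVA for the closed system: one queueing center plus a think-time delay.
  Q = 0.0
  for n in range(1, N + 1):
      R = S * (1.0 + Q)    # residence time seen by the nth added user
      X = n / (R + Z)      # interactive response-time law
      Q = X * R            # Little's law at the queueing center

  print(f"Closed: N={N} vusers -> X={X:.2f} req/s, R={R*1000:.1f} ms")

  # Open M/M/1 model driven at the throughput the closed test achieved.
  lam = X
  R_open = S / (1.0 - lam * S)
  print(f"Open  : lambda={lam:.2f} req/s -> R={R_open*1000:.1f} ms")

  # The open model shows more queueing at the same request rate because its
  # arrivals are not throttled by a finite, synchronized user population;
  # that is the conundrum the abstract refers to.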

Tuesday, September 29, 2015

Remember the Alamo at CMG 2015

The Alamo is a reference to an episode in Texan history about defeat and revenge. But, there's nothing defeatist or mythical about the sessions I'll be giving at CMG in San Antonio this year.

Workshop: How to Do Performance Analytics with R, Mon Nov 2, 8-12am

You've collected cubic light-years of performance monitoring data, now whaddya gonna do? Raw performance data is not the same thing as information, and the typical time-series representation is almost the worst way to glean information. Neither your brain nor that of your audience is built for that (blame it on Darwin). To extract pertinent information, you need to transform your data and that's what the R statistical computing environment can help you do, including automatically.

Topics covered will include:

  • Introduction to R using RStudio
  • Descriptive statistics
  • Performance visualization
  • Data reduction techniques
  • Multivariate analysis
  • Machine learning techniques
  • Forecasting with R
  • Scalability analysis

Invited talk: Hadoop Super Scaling, Wed Nov 4, 5-6pm

The Hadoop framework is designed to facilitate the parallel processing of massive amounts of unstructured data. Originally intended to be the basis of Yahoo's search engine, it is now open source at Apache. Since Hadoop has a broad range of corporate users, a number of companies offer commercial implementations of, or support for, Hadoop.

However, certain aspects of Hadoop performance, especially scalability, are not well understood. One such anomaly is the claimed "flat scalability" benefit for developing Hadoop applications. Another is that it is possible to achieve faster-than-parallel processing. In this talk I will explain the source of these anomalies by presenting a consistent method for analyzing Hadoop application scalability.
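
One way to see how faster-than-parallel scaling can arise, consistent with the Universal Scalability Law mentioned elsewhere on this blog, is to let the contention coefficient go negative (an economy of scale, such as more aggregate cache per added node). This is only a sketch of that idea, not necessarily the analysis given in the talk, and the coefficients are invented for illustration.

  # Universal Scalability Law: C(N) = N / (1 + sigma*(N-1) + kappa*N*(N-1)).
  # A negative sigma models an economy of scale and yields superlinear
  # speedup (C(N) > N) at moderate N. Coefficients are illustrative only,
  # not fitted to any real Hadoop measurements.

  def usl(N, sigma, kappa):
      return N / (1.0 + sigma * (N - 1) + kappa * N * (N - 1))

  for N in (1, 2, 4, 8, 16, 32):
      superlinear = usl(N, sigma=-0.02, kappa=0.001)   # economy of scale
      conventional = usl(N, sigma=+0.02, kappa=0.001)  # ordinary contention
      print(f"N={N:2d}  linear={N:5.1f}  "
            f"usl(sigma<0)={superlinear:6.2f}  usl(sigma>0)={conventional:6.2f}")

  # Note that the superlinear case eventually pays back: by N = 32 the
  # coherency term dominates and C(N) falls below linear again.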

CMG-T: Capacity and Performance for Newbs and Nerds, Thur Nov 5, 9-11am

In this tutorial I will bust some entrenched myths and develop basic capacity and performance concepts from the ground up. In fact, any performance metric can be boiled down to one of just three metrics. Even if you already know metrics like throughput and utilization, that's not the most important thing: it's the relationship *between* those metrics that's vital! For example, there are at least three different definitions of utilization. Can you state them? This level of understanding can make a big difference when it comes to solving performance problems or presenting capacity planning results.

Other myths that will get busted along the way include:

  • There is no response-time knee.
  • Throughput is not the same as execution rate.
  • Throughput and latency are not independent metrics.
  • There is no parallel computing.
  • All performance measurements are wrong by definition.

No particular knowledge about capacity and performance management is assumed.

See you in San Antonio!

Monday, August 24, 2015

PDQ Version 6.2.0 Released

PDQ (Pretty Damn Quick) is a FOSS performance analysis tool, based on the paradigm of queueing models, that can be programmed natively in C, Perl, Python, and R.

This minor release is now available for download.