Previously, I wrote an article about how I got triggered after somebody on Reddit exclaimed that WCF was faster, that is, had lower response times, than ASP.NET Web API and ASP.NET Core MVC.

The outcome was that, yes, with the default configuration, WCF does have lower response times. Once you start tweaking the serializers, though, both ASP.NET Web API and ASP.NET Core MVC come out on top.

The article kicked up a bit of a storm on Reddit, though, with people complaining that the comparison wasn’t fair and that I should have done this or that. Somebody even wrote an entire blog post about how it should be done. I wanted to address some of those comments.

Expectations

In my original article, I did not explain my intentions very well. It was not meant as a be-all and end-all determination of whether to use WCF or not. There are plenty of valid reasons for using or not using WCF besides performance. I was simply trying to figure out whether WCF could have better per-request latency than the existing ASP.NET offerings.

Lower latency doesn’t mean better performance

Several people pointed out that performance isn’t just about latency. Specifically, ASP.NET Core and Kestrel have been designed to handle many requests concurrently, probably at the cost of some latency. I definitely agree, but my post was only about latency, because latency is still an important metric for the performance of your service, and it also happens to be one of the easier ones to measure.

You’re doing unnecessary per-call computations

Josh Bartley wrote an entire article in reply to mine. Thanks for taking the time to do that, Josh.

In the article, he mentions that I’m doing reflection on every call to construct the URI for the Web API/ASP.NET Core requests. That’s a fair point, and he makes some other good ones too, such as that I was using delegates to switch between implementations. I’m not sure how much that specifically affects the numbers, but it sure is ugly. So I rewrote all of the benchmarks to use a type hierarchy instead of big ugly switch statements, and made sure reflection only happens once, during initialization.
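The restructured clients look roughly like this; it’s a sketch with made-up names (deriving the route from a type name is an assumption for illustration), not the actual benchmark code:

    using System;

    // Each client resolves its endpoint once, in the constructor, instead of
    // reflecting on every call; the old delegate/switch dispatch is replaced
    // by one subclass per framework/serializer combination.
    public abstract class BenchmarkClient
    {
        protected readonly Uri Endpoint;

        protected BenchmarkClient(string baseAddress, Type requestType)
        {
            // Reflection (here: just reading the type name to build a route)
            // happens exactly once, during initialization.
            Endpoint = new Uri(new Uri(baseAddress), "api/" + requestType.Name);
        }

        public abstract byte[] Send(byte[] requestBody);
    }

    public sealed class AspNetCoreMessagePackClient : BenchmarkClient
    {
        public AspNetCoreMessagePackClient(string baseAddress, Type requestType)
            : base(baseAddress, requestType) { }

        public override byte[] Send(byte[] requestBody)
        {
            // Post requestBody to Endpoint with the MessagePack content type...
            throw new NotImplementedException();
        }
    }

Let’s see what has changed.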

Method ItemCount Mean
LargeAspNetCoreMessagePackFuncs 100 9 017,7 μs
LargeAspNetCoreMessagePackHttpClientAsync 100 8 882,4 μs

There’s a very small improvement, but it’s not very impressive. I think it’s safe to say that reflection and the use of delegates were not really an issue.

Asynchrony is killing the numbers

Redditor Langebein remarked the following:

You’re doing PostAsync in the WebApi tests, while doing synchronous calls with WCF. I'd wager a lot of the difference is made up from starting tasks and spinning up threads with the fancy HttpClient.

That’s an interesting point. With my new code structure in place, I could easily add other types of clients. First, let’s see what happens if we don’t await everything.
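The only difference between the two rows below is whether the HttpClient call is awaited or blocked on. I’m not claiming this is literally the benchmark code, but the shape is something like this sketch:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class HttpClientVariants
    {
        private static readonly HttpClient Client = new HttpClient();

        // Asynchronous variant: awaits the request end to end.
        public static async Task<byte[]> PostAwaited(Uri endpoint, byte[] body)
        {
            using (var content = new ByteArrayContent(body))
            {
                var response = await Client.PostAsync(endpoint, content);
                return await response.Content.ReadAsByteArrayAsync();
            }
        }

        // "Don't await" variant: the same call, but blocking on the task.
        public static byte[] PostBlocking(Uri endpoint, byte[] body)
        {
            using (var content = new ByteArrayContent(body))
            {
                var response = Client.PostAsync(endpoint, content).GetAwaiter().GetResult();
                return response.Content.ReadAsByteArrayAsync().GetAwaiter().GetResult();
            }
        }
    }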

Method ItemCount Mean
LargeAspNetCoreMessagePackHttpClient 100 8 699,5 μs
LargeAspNetCoreMessagePackHttpClientAsync 100 8 882,4 μs

That’s a significant difference already. But Langebein was talking about HttpClient as a whole. So let’s see what happens when we use good ol’ HttpWebRequest.

Method ItemCount Mean
LargeWebApiMessagePackHttpClient 100 9 292,9 μs
LargeWebApiMessagePackHttpClientAsync 100 9 372,5 μs
LargeWebApiMessagePackHttpWebRequest 100 7 023,7 μs
LargeAspNetCoreMessagePackHttpClient 100 8 699,5 μs
LargeAspNetCoreMessagePackHttpClientAsync 100 8 882,4 μs
LargeAspNetCoreMessagePackHttpWebRequest 100 6 347,1 μs
LargeWcfText 100 10 306,5 μs

That’s a pretty big win. For all its fanciness, HttpClient adds a lot of overhead, possibly related to its asynchronous nature. For reference, I’ve added the fastest WCF result from the earlier post, and there is an undeniable advantage for ASP.NET Core with MessagePack. ASP.NET Web API does pretty well, too.
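For reference, the HttpWebRequest variant boils down to something like the sketch below. The application/x-msgpack content type and the exact MessagePack calls are assumptions on my part; the real code is in the repository linked at the end.

    using System;
    using System.Collections.Generic;
    using System.Net;
    using MessagePack;

    public static class HttpWebRequestClient
    {
        public static List<T> Post<T>(Uri endpoint, List<T> items)
        {
            var request = (HttpWebRequest)WebRequest.Create(endpoint);
            request.Method = "POST";
            request.ContentType = "application/x-msgpack";
            request.Accept = "application/x-msgpack";

            // Serialize the request body straight onto the request stream.
            using (var requestStream = request.GetRequestStream())
            {
                MessagePackSerializer.Serialize(requestStream, items);
            }

            // Block on the response and deserialize it from the response stream.
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var responseStream = response.GetResponseStream())
            {
                return MessagePackSerializer.Deserialize<List<T>>(responseStream);
            }
        }
    }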

Since we’ve established that using HttpWebRequest offers the lowest latency, let’s establish a new baseline.

Method ItemCount Mean
LargeWcfText 100 10 306,5 μs
LargeWcfWebXml 100 10 125,9 μs
LargeWcfWebJson 100 12 535,9 μs
LargeWebApiJsonNetHttpWebRequest 100 14 204,2 μs
LargeWebApiMessagePackHttpWebRequest 100 7 023,7 μs
LargeWebApiXmlHttpWebRequest 100 12 461,1 μs
LargeWebApiUtf8JsonHttpWebRequest 100 11 023,6 μs
LargeAspNetCoreJsonNetHttpWebRequest 100 18 784,1 μs
LargeAspNetCoreMessagePackHttpWebRequest 100 6 347,1 μs
LargeAspNetCoreXmlHttpWebRequest 100 20 484,6 μs
LargeAspNetCoreUtf8JsonHttpWebRequest 100 9 944,2 μs

Now we can see MessagePack on both stacks and Utf8Json on ASP.NET Core MVC overtaking WCF, and XML on ASP.NET Web API is not far behind.

What about binary serialization for WCF? And raw TCP?

Several people commented that I should have included more WCF options, such as binary serialization and the revered NetTcpBinding. Binary serialization was simply something I forgot about after a couple of years of not doing WCF.

NetTcpBinding is not something I think makes for a fair comparison: there is an overhead to using HTTP that the ASP.NET Core and Web API tests cannot avoid. Still, I decided to include it for argument’s sake.

Interestingly enough, the way I was invoking the WCF client meant I was creating a new client for each call. For the HTTP bindings, this was not an issue, because they’re handing off connection management to the HTTP library. NetTcpBinding has to do it by itself. The way I was doing it, I was never actually releasing clients, which meant sockets would stay open forever. I tried benchmarking it, but it never even got past the ‘pilot’ phase, where BenchmarkDotNet tries to determine the optimum number of invocations per run.

After I’d modified the code to create a client and immediately release it after each call, things were a little better. A little. Now I would actually reach the ‘main’ phase, but eventually the process would either halt or throw an exception, because it was consuming sockets faster than they could be released.

Eventually I had to settle for using a single client instance for the lifetime of the benchmark. Unsurprisingly, the results are very good.
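For the curious, the single-client approach looks roughly like this. IItemService is a made-up contract, and the real benchmark may well construct its client differently; this is only a sketch of the idea:

    using System;
    using System.ServiceModel;

    // Hypothetical contract standing in for the real benchmark service.
    [ServiceContract]
    public interface IItemService
    {
        [OperationContract]
        byte[] Process(byte[] payload);
    }

    public sealed class WcfNetTcpClient : IDisposable
    {
        private readonly ChannelFactory<IItemService> factory;
        private readonly IItemService channel;

        public WcfNetTcpClient(string address)
        {
            // One factory and one channel for the whole benchmark run,
            // instead of a new client (and a new socket) per call.
            factory = new ChannelFactory<IItemService>(
                new NetTcpBinding(), new EndpointAddress(address));
            channel = factory.CreateChannel();
        }

        public byte[] Process(byte[] payload) => channel.Process(payload);

        public void Dispose()
        {
            // Close the channel and factory once the benchmark is done.
            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }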

Method ItemCount Mean
LargeWcfText 100 10 306,5 μs
LargeWcfWebXml 100 10 125,9 μs
LargeWcfWebJson 100 12 535,9 μs
LargeWcfNetTcp 100 5 469,9 μs
LargeAspNetCoreMessagePackHttpWebRequest 100 6 347,1 μs

Binary serialization was a fair suggestion that I’d simply forgotten to include. Once added, it is definitely the fastest out-of-the-box serialization option for WCF. Still, MessagePack on ASP.NET Core MVC has an ever so slightly lower response time.

Method ItemCount Mean
LargeWcfText 100 10 306,5 μs
LargeWcfWebXml 100 10 125,9 μs
LargeWcfWebJson 100 12 535,9 μs
LargeWcfBinary 100 6 889,3 μs
LargeWcfNetTcp 100 5 469,9 μs
LargeAspNetCoreMessagePackHttpWebRequest 100 6 347,1 μs
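If you want to try it yourself, one way to enable WCF’s binary message encoding over HTTP is a CustomBinding, along these lines (a sketch, not necessarily the exact configuration used in the benchmark):

    using System.ServiceModel.Channels;

    public static class WcfBindings
    {
        // Binary message encoding stacked on top of the plain HTTP transport.
        public static Binding CreateBinaryHttpBinding()
        {
            return new CustomBinding(
                new BinaryMessageEncodingBindingElement(),
                new HttpTransportBindingElement());
        }
    }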

How about a comparison between Full Framework and .NET Core?

I very much wanted to include this, but I kept running into issues with BenchmarkDotNet, which either failed halfway through or didn’t report all of the metrics.

There’s no fair comparison between WCF and the others!

Redditor IAmVerySmarter (who, incidentally, also suggested using NetTcpBinding) complained that the comparison wasn’t fair because there was no benchmark pitting WCF against ASP.NET Core MVC or Web API using the same serializer, MessagePack.

Unfortunately, the only package I could find for using MessagePack with WCF is MsgPack.Wcf, which uses MsgPack.Cli rather than MessagePack. According to the benchmarks, MsgPack.Cli takes a lot more time to serialize and deserialize than MessagePack does. Comparing the same serializer across all three stacks is still fair, though, so let’s see.
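To be clear, these really are two different libraries rather than two versions of the same one: both expose a type called MessagePackSerializer, but with completely different APIs. A rough sketch of the two, just for illustration:

    using System.IO;
    using MsgPackCli = MsgPack.Serialization.MessagePackSerializer;  // MsgPack.Cli
    using MessagePackCSharp = MessagePack.MessagePackSerializer;     // MessagePack

    public static class SerializerComparison
    {
        public static void RoundTrip<T>(T value)
        {
            // MsgPack.Cli: a serializer instance per type, stream-based Pack/Unpack.
            var cliSerializer = MsgPackCli.Get<T>();
            using (var stream = new MemoryStream())
            {
                cliSerializer.Pack(stream, value);
                stream.Position = 0;
                var cliResult = cliSerializer.Unpack(stream);
            }

            // MessagePack (neuecc): static Serialize/Deserialize over byte arrays.
            byte[] bytes = MessagePackCSharp.Serialize(value);
            var result = MessagePackCSharp.Deserialize<T>(bytes);
        }
    }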

Method ItemCount Mean
LargeWcfText 100 10 306,5 μs
LargeWcfBinary 100 6 889,3 μs
LargeWcfMsgPackCli 100 22 791,3 μs
LargeAspNetCoreMessagePackHttpWebRequest 100 6 347,1 μs
LargeWebApiMsgPackCliHttpWebRequest 100 28 510,5 μs
LargeAspNetCoreMsgPackCliHttpWebRequest 100 26 311,1 μs

As you can see, MsgPack.Cli on either ASP.NET Core MVC or ASP.NET Web API takes several times longer than MessagePack. In fact, it takes longer than any other serializer tested. However, MsgPack.Cli on WCF takes a bit less time than on ASP.NET Core or ASP.NET Web API. This might indicate that the underlying infrastructure for WCF is the faster one.

Can you add [framework/serializer] to the comparison?

I could, yes. On the other hand, the source code is right there, so you could also fiddle around with it yourself.

ZeroFormatter

One serializer that I’ve seen mentioned several times is ZeroFormatter. This serializer claims to provide ‘infinitely fast deserialization’. That’s a bold claim, even from its creator, neuecc, who’s also responsible for the excellently performing MessagePack and Utf8Json libraries.
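That claim is easier to parse once you know how ZeroFormatter works: deserialization is lazy, so Deserialize essentially returns a view over the byte array and members are only decoded when accessed, which is also why its contracts require virtual properties. A minimal sketch, with a made-up Item type:

    using ZeroFormatter;

    // ZeroFormatter contracts need [ZeroFormattable], explicit [Index]es and
    // virtual properties so that member access can be intercepted and decoded
    // lazily from the underlying buffer.
    [ZeroFormattable]
    public class Item
    {
        [Index(0)]
        public virtual int Id { get; set; }

        [Index(1)]
        public virtual string Name { get; set; }
    }

    public static class ZeroFormatterExample
    {
        public static Item RoundTrip(Item item)
        {
            byte[] bytes = ZeroFormatterSerializer.Serialize(item);
            return ZeroFormatterSerializer.Deserialize<Item>(bytes);
        }
    }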

Method ItemCount Mean
LargeWebApiMessagePackHttpWebRequest 100 7 023,7 μs
LargeAspNetCoreMessagePackHttpWebRequest 100 6 347,1 μs
LargeWebApiZeroFormatterHttpWebRequest 100 7 998,7 μs
LargeAspNetCoreZeroFormatterHttpWebRequest 100 6 888,7 μs

It does pretty well, but it’s not quite as quick as MessagePack. Note that this test covers the full round trip: the request is serialized on the client and deserialized on the server, and the response is then serialized on the server and deserialized on the client again. Deserializing might be ‘infinitely fast’, but serializing apparently is not.

Benchmark code

In response to the previous article, I got a lot of questions from people who wanted to see the code I used for the benchmarks. There was a link hidden away somewhere in that article, so let me make it clear this time:

Here is the benchmark code: GitHub