The official Fatica Labs Blog!
# Friday, February 15, 2013

In the last post I showed a Publish/Subscribe communication pattern with ZeroMQ and its C# binding library, demonstrating an asynchronous way of dispatching messages to many clients. Well, this is not the only option we have. Another strategy is to have a listener and many clients sending messages to it and awaiting a response. All this is achieved just by changing the type of Socket we create.

You can find the code for this example and the previous one here.

 

Here is the code for the client:

[Image: client code]

After the connect, we start a loop sending a message to the server, and receiving the reply from it.
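A minimal sketch of such a client, assuming the clrzmq 3.x API; the endpoint address and message text are made up and the exact signatures may differ slightly from the code shown above:

```csharp
using System;
using System.Text;
using ZeroMQ; // clrzmq binding

class Client
{
    static void Main()
    {
        using (var context = ZmqContext.Create())
        using (var socket = context.CreateSocket(SocketType.REQ))
        {
            // Connect succeeds even if the server is not listening yet.
            socket.Connect("tcp://localhost:5555");

            for (int i = 0; i < 10; i++)
            {
                // Send never blocks...
                socket.Send("request " + i, Encoding.UTF8);
                // ...but Receive blocks until the server replies, and we
                // cannot Send again before the reply arrives.
                string reply = socket.Receive(Encoding.UTF8);
                Console.WriteLine("got: " + reply);
            }
        }
    }
}
```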

Here is the server:

[Image: server code]
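And a matching minimal sketch of the server, under the same clrzmq 3.x assumptions (hypothetical endpoint and reply text):

```csharp
using System.Text;
using ZeroMQ; // clrzmq binding

class Server
{
    static void Main()
    {
        using (var context = ZmqContext.Create())
        using (var socket = context.CreateSocket(SocketType.REP))
        {
            // Bind is what makes this end the listener.
            socket.Bind("tcp://*:5555");

            while (true)
            {
                // Requests are processed strictly one at a time.
                string request = socket.Receive(Encoding.UTF8);
                socket.Send("reply to " + request, Encoding.UTF8);
            }
        }
    }
}
```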

Please note the Bind() function: this is the line saying the server is listening for calls. The code contains a test proving that messages are handled in sequence even if many clients are sending messages concurrently. This is part of the key point of this communication pattern:

 

Request/Reply key points:

  • The client's Connect call succeeds without error, even if the server is not yet listening.
  • The client's Send call never blocks.
  • The client's Receive() blocks until the server replies.
  • It is not possible to Send another message until the reply from the server has been received.
  • The server is guaranteed to process one request at a time (requests are queued).

So there is a sort of state on the channel, and we get some feedback about the fact that the recipient is handling our messages. If you have heard about the Saga pattern, you can probably guess where this scenario can be used.

Friday, February 15, 2013 7:12:15 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] - Trackback
C# | communication | ZeroMQ

# Tuesday, February 12, 2013

ZeroMQ is a serverless message queuing library that does not require any additional services to work (i.e. no MSMQ needed). I show here an example of how to use that library from C# by creating a small publish/subscribe scenario. Just to clarify: it is a scenario with a publisher sending some messages and one or more subscribers that are notified of those messages.

You can find all the source code for this example here.

 

To get started, we need a wrapper callable from .NET, since ZeroMQ exposes a native interface. I used clrzmq for this purpose in my project; I cloned my own version just to fix some bugs in compiling when there are spaces in a subdirectory, so the actual version I use is here.

Then I downloaded the latest stable release (3.2.2 RC 2 at the moment I'm writing) and launched the setup. The binary folder after the setup will contain something like this:

[Image: ZeroMQ install folder listing the libzmq-vXXX DLLs]

Notice the naming containing the -vXXX suffix: this is the C++ runtime version the library is built against. Pick the correct one for your system (you must have that Visual C++ runtime installed) and rename it to libzmq.dll, since the wrapper expects the DLL with this name. Alternatively, you can download the wrapper via NuGet:

PM> Install-Package clrzmq -Pre

To just see the example, you can simply check out the example repository.

 

The example is divided into two (console) applications, a publisher and a receiver. Let's see the sender below:

[Image: publisher code]
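A minimal sketch of what the publisher looks like, assuming the clrzmq 3.x API (the endpoint and payload are made up):

```csharp
using System;
using System.Text;
using System.Threading;
using ZeroMQ; // clrzmq binding

class Publisher
{
    static void Main()
    {
        using (var context = ZmqContext.Create())
        using (var socket = context.CreateSocket(SocketType.PUB))
        {
            // The publisher binds: it decides the protocol and address.
            socket.Bind("tcp://*:5555");

            var random = new Random();
            while (true)
            {
                // Messages are byte[]; the string overload is a wrapper convenience.
                socket.Send("value " + random.Next(100), Encoding.UTF8);
                Thread.Sleep(500); // Send returns immediately, subscribers or not.
            }
        }
    }
}
```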

 

Really simple: the key points are SocketType.PUB, meaning we use the socket to publish messages, and the Bind, in which we decide an address and a protocol on which to listen for connections. Then we start sending some rubbish on the created channel. Messages are strings here, but ultimately they are byte[]: the Send overload accepting a string is actually an additional bonus of the wrapper.

Notice that:

  • No server is required in order to dispatch messages
  • Send is not blocking: whether there are subscribers or not, the Send function exits immediately.

Let's see the subscriber:

[Image: subscriber code]
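And a matching sketch of the subscriber, under the same clrzmq 3.x assumptions:

```csharp
using System;
using System.Text;
using ZeroMQ; // clrzmq binding

class Subscriber
{
    static void Main()
    {
        using (var context = ZmqContext.Create())
        using (var socket = context.CreateSocket(SocketType.SUB))
        {
            // Must match the publisher's protocol/address.
            socket.Connect("tcp://localhost:5555");
            // Receive everything; Subscribe(prefix) would filter messages instead.
            socket.SubscribeAll();

            while (true)
            {
                string message = socket.Receive(Encoding.UTF8);
                Console.WriteLine(message);
            }
        }
    }
}
```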

The key points here are SocketType.SUB, the subscriber marker; the Connect, which must match the publisher's protocol/address; and SubscribeAll (there is a less eager Subscribe that allows specifying a filter for the messages).

Notice that:

  • You can start/stop a subscriber at any moment; it will soon be notified of the published messages.
  • Messages are not queued at all.

The last point may be a little confusing if you expect something like MSMQ: there is no message buffer somewhere storing unconsumed messages (i.e. there are no permanent subscriptions as in ActiveMQ); if you want that feature you must implement it externally.

So, a great and simple library, with simplicity, lightness and no service requirement as pros, but with the drawbacks of needing the Visual C++ runtime and the lack of permanent subscriptions out of the box.

**UPDATE**

I had a chance to test the example codebase here on an almost clean Windows 7 machine (without any Visual Studio installed), and the solution works by XCOPY deploying msvcr100.dll and msvcp100.dll, which are included in the repository. That's a great thing, and it makes the 0 in 0MQ an actual 0 :D

Tuesday, February 12, 2013 3:26:11 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] - Trackback
C# | communication | ZeroMQ

I've noticed that sometimes, when developers want to embed an external library in their own infrastructure, they feel the need to encapsulate that dependency in a common-versus-tool library. So we have a plethora of, say, CommonLogging, CommonIoC, CommonOrm and so on, each one claiming the ability to "Just Plug The Tool".

I would like to try to show why this is a tremendously ugly antipattern in my opinion, by showing the side effects of this approach, and alternative solutions.

 

1. CommonIoC or CommonServiceLocator (squared antipattern)

The first impressive attempt is trying to insulate the IoC kernel we want to use. So instead of having the application depend on the IoC, we just have the app depend on the CommonThing: we do some work creating it, and we get back nothing but a boring piece of code to maintain. The spectacular case is when the encapsulated guy is the service locator, which by itself kills the entire effort in code decoupling, making component dependencies obscure to everyone but the developer who coded the component in the first week. So creating the common layer just multiplies the side effects, and the result is a squared antipattern that scares any reasonable professional.

The solution: in this case there is a pattern that is a real silver bullet: Composition Root. Use the IoC in a single point and have its effect propagate through the whole application by using the proper IoC features: inject into constructors, use factories, and let the IoC effect propagate recursively across the application without referring directly to the kernel. In the root, use all the IoC features you have; by not having the common thing around you, there is no lowest common denominator to deal with. If you want to change the IoC in the future, you completely rewrite this single file, and you will use all the new kernel's features. You should avoid using byzantine attributes for injecting your properties: if the IoC you intend to use needs them, it is not the correct one. A side effect of this approach could be an increasing number of factory classes. While this could be considered a problem, some IoCs offer some sort of automatic factories, out of the box or with an additional plugin. I'm not sure whether they help or not; I fear the code becomes too magic, and this will possibly violate the "Coding For a Violent Psychopath" principle. As a last consideration: IoC is for applications, not for libraries. If you are writing a reusable infrastructure component, you must not base it on any IoC strategy. Things are a little different if you are creating a framework, and this framework leverages and offers services when an IoC exists in the hosting application. Even in this case, instead of creating a common thing, the framework should provide its own IoC abstraction, and the hosting application must provide an adapter to the IoC currently in use. The framework abstraction must be designed to fully satisfy the framework's requirements, leaving the adapter implementer the challenge of adapting its own IoC. A good example of a framework using the described strategy is Caliburn Micro.
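To make the idea concrete, here is a minimal sketch of a composition root, with hypothetical types and manual wiring standing in for a real kernel; the point is only that components declare their dependencies in constructors and never touch the container:

```csharp
using System;

// Hypothetical abstraction and implementation, just for illustration.
public interface IOrderRepository { void Save(string order); }

public class SqlOrderRepository : IOrderRepository
{
    public void Save(string order) { Console.WriteLine("saved: " + order); }
}

// Components only declare what they need in their constructor;
// they never reference the container (or a "CommonIoC") directly.
public class OrderService
{
    private readonly IOrderRepository _repository;
    public OrderService(IOrderRepository repository) { _repository = repository; }
    public void Place(string order) { _repository.Save(order); }
}

public static class Program
{
    public static void Main()
    {
        // The composition root: the only place where concrete types
        // (or a real IoC kernel) are wired together. Swapping the IoC
        // means rewriting only this file.
        IOrderRepository repository = new SqlOrderRepository();
        var service = new OrderService(repository);
        service.Place("sample order");
    }
}
```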

 

2. Common logging

Since every good, appealing application logs, this is a real temptation for the CommonLogging beast. Even without considering the fact that the logging libraries actually used in production number about two, why do we need this? Please count how many times you decided to drop one logger in favour of another, and even if that was the case, did that refactoring time really jeopardize the project? Another danger coming from the CommonBeast here is that, being such a library "too small" to be justified, it can possibly collapse into another antipattern: "the enterprise magic library", a library containing a set of more or less useful things that every developer wants to maintain when new stuff has to be added, and that no one feels responsible for when it breaks.

The solution is the same as above: define the logging interface in the library, and let the application implement it. A great example of this comes from NHibernate. NH was strictly bound to log4net until version 3.x; starting from there, you can implement its own abstract logging interface. Nicely, NH uses log4net as a default by loading it dynamically (eliminating the need to have log4net at compile time), degrading gracefully to the old behaviour.
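As a rough sketch of the idea (hypothetical interface names, not NHibernate's actual ones), a library can own a tiny logging abstraction and let the host plug in an adapter for whatever logger it already uses:

```csharp
using System;

// The abstraction belongs to the library; no logging framework is referenced.
public interface ILibraryLogger
{
    void Debug(string message);
    void Error(string message, Exception exception);
}

// Shipped default: do nothing, so the library works with no logger at all.
public class NullLogger : ILibraryLogger
{
    public void Debug(string message) { }
    public void Error(string message, Exception exception) { }
}

// The hosting application provides the adapter to its own logger,
// e.g. log4net, and plugs it in at startup.
public class Log4NetAdapter : ILibraryLogger
{
    private readonly log4net.ILog _log;
    public Log4NetAdapter(log4net.ILog log) { _log = log; }
    public void Debug(string message) { _log.Debug(message); }
    public void Error(string message, Exception exception) { _log.Error(message, exception); }
}
```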

3. Common OR/M

This is probably the worst. An OR/M already introduces a sensible impedance towards the database; adding a home-made abstraction layer pretending to deal with both NH and EF adds another, multiplying the impedance in your application. Writing an application that leverages an OR/M usually means following a collection of best practices the OR/M imposes to make data access as fast as possible. Such tricks are usually difficult to abstract, and you don't know many of them until you face them.

The solution: the first solution is not to do it at all. It is not such a strange scenario to choose the OR/M you want to use in the design phase, and bring it to production and maintenance simply with that OR/M. By the way, software makes money when it does what the customer wants, not when it is modified to follow the latest trend in data access. Another, less strict solution is to use the "query object pattern". That does not mean implementing your own ad hoc query provider; it means encapsulating each aggregate's operations in single classes responsible for all the data access, exposing proper and useful methods to refine the queries specialized for that aggregate. This allows a conceptually easy refactoring if we want to change the OR/M. And by the way, in an everyday scenario you mostly don't swap NH or EF; more probably you want to change some of your query objects to use some micro OR/M for performance reasons.
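A rough sketch of such a query object, using NHibernate's LINQ provider and a made-up Order aggregate; the shape, not the details, is what matters:

```csharp
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq; // LINQ provider for NHibernate

// Hypothetical aggregate, just for illustration.
public class Order
{
    public virtual int Id { get; set; }
    public virtual int CustomerId { get; set; }
    public virtual bool Shipped { get; set; }
}

// A query object: the single class responsible for this aggregate's data access.
// If the OR/M ever changes, only classes like this one are rewritten.
public class PendingOrdersQuery
{
    private readonly ISession _session;
    public PendingOrdersQuery(ISession session) { _session = session; }

    public IList<Order> ForCustomer(int customerId)
    {
        return _session.Query<Order>()
                       .Where(o => o.CustomerId == customerId && !o.Shipped)
                       .ToList();
    }
}
```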

 

The series of examples could be extended to most of the utility libraries we use; a CommonMessaging is the first one that comes to mind. The point is that the "CommonThing" is a worst practice: it gives us no advantages, just pain and problems, so spend your effort avoiding it and focus on the real problem.

Tuesday, February 12, 2013 8:37:14 AM (GMT Standard Time, UTC+00:00)  #    Comments [0] - Trackback
Programming | pattern

# Wednesday, July 25, 2012

I just did a new project at work with heavy and extensive data access over legacy databases, and I tried the DapperDotNet micro OR/M instead of NHibernate. Let me point out first that I already had all the infrastructure to map such a legacy DB in NH by using mapping by code and leveraging a lot of conventions in the DB table/field naming, so the mapping work does not make any difference for me, apart from the fact that it is not needed with Dapper (or at least, not needed in the entity-based form), since you just map the data transfer structures. So what's missing compared to NH? Let's see:

  • Inheritance: I was a little worried about Dapper's lack of support for any kind of inheritance concept, but in the end I managed to meet all the requirements without it; having the best DSL for querying the database did the work.
  • Identity Map: We have to keep in mind that the identity map no longer exists when using a micro OR/M. This matters not just for subsequent queries in the same session, but when we load associations, especially when the associated class holds a lot of data. For example, I had an association with an entity containing a big bunch of XML; if I load that association into a DTO, I have to take care myself of loading it only when the associated id changes.
  • Lazy Collections: Using Dapper we have to forget such automatic features; there is basically no such concept, but I can really live without it.
  • Db Schema Create/Update: I only really miss that in unit testing, where you have to craft the schema by hand. In production, in my case, I have no control over the schema generation *at all*, so it is not a problem anyway, and I guess the NH update/generation is not enough for a real DB deployment; you probably need a DB migration in any case.
  • Linq/Hql: In fact I miss LINQ to NH, not HQL at all. But we have to consider that a big portion of the impedance an OR/M introduces is caused by the creation of a DSL on top of plain SQL.

Let's consider the pure benefits we get from Dapper:

  • Any kind of optimized SQL is easy to submit (see the sketch after this list).
  • Calling a stored procedure with in/out parameters is as simple as calling a query.
  • Multiple result sets are easy to handle (the Future<> of NH).
  • Bulk operations are easy too (you still need a real bulk insert if you really want to insert big amounts of data).
  • A really noticeable increase in performance, due to the smart underlying ADO.NET access and to the fact that we control the SQL round trips ourselves.
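As a minimal sketch of that first point (made-up DTO, table and connection string; only the Query<T> call is Dapper's actual API):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

// Hypothetical DTO: Dapper maps columns to properties by name.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerQueries
{
    public static IList<CustomerDto> ActiveCustomers(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            // Plain, hand-optimized SQL goes straight to the database.
            return connection.Query<CustomerDto>(
                "SELECT Id, Name FROM Customers WHERE Active = @active",
                new { active = true }).ToList();
        }
    }
}
```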

So, in my opinion: we probably write a little more code in the data access layer, but we have more control and there is no separate "mapping" part, which can be not so easy to maintain; it is really worth the effort to move decisively in the micro OR/M direction.

Wednesday, July 25, 2012 11:32:20 AM (GMT Daylight Time, UTC+01:00)  #    Comments [1] - Trackback
C# | NHibernate | ORM

# Friday, July 20, 2012

As announced by Scott Guthrie, EF is now available as open source on CodePlex. As usual, I had a first glance at the code to see what's inside. It is a big codebase, as you can guess, but even at first sight it is possible to spot some interesting things to learn. Here is my list:

So nothing really complex, just good code snippets. Interestingly, they internally use xUnit for unit testing, not MSTest, and the mocking framework is Moq.

Friday, July 20, 2012 10:31:10 AM (GMT Daylight Time, UTC+01:00)  #    Comments [0] - Trackback
C# | CodeProject

# Monday, June 25, 2012

I recently started playing with Stack Exchange Data. You can use that web app to look up existing queries created by other users, or create your own, against any Stack Exchange site. You can find out, for example, how many times a certain tag appears as part of a question, to discover what's 'trendy' (if you think the number of questions and answers, and the view counts, are meaningful from this point of view). If you find this interesting, you can leverage Stack Exchange data to extract almost whatever you want. Here below are some examples:

top 20 ‘Trending’ Tags in the last 30 days

[Image: query results]

Or, if you are curious about your position in your country in terms of reputation, you can modify this query:

top 20 users classified by reputation in Italy

[Image: query results]

Monday, June 25, 2012 10:16:50 AM (GMT Daylight Time, UTC+01:00)  #    Comments [0] - Trackback


Disclaimer
The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.

All Content © 2014, Felice Pollano
DasBlog theme 'Business' created by Christoph De Baene (delarou) and modified by Felice Pollano