The official Fatica Labs Blog! RSS 2.0
# Friday, 20 July 2012

As announced by Scott Guthrie, EF is today available as open source on CodePlex. As usual I had a first glance at the code to see what's inside. It is a big codebase, as you can guess, but even at first sight it is possible to spot some interesting things to learn. Here is my list:

So nothing really complex, just good code snippets. Interestingly, they internally use xUnit for unit testing, not MSTest, and the mocking framework is MoQ.

Friday, 20 July 2012 10:31:10 (GMT Daylight Time, UTC+01:00)  #    Comments [0] - Trackback
C# | CodeProject

# Thursday, 12 April 2012

There are scenarios in which NHibernate performance decreases even if we make every effort to use it correctly. This can happen when we need, in some circumstances, to load a lot of records (I deliberately say "records" instead of "entities") from some relational structure: doing this the OR/M way means overloading the session with a lot of entities, which is painful in terms of speed. Other cases happen when we need to write or update something that is not properly represented in the entity model we have, maybe because the model is more "read" oriented. Other cases? I can't list them all, of course, but I'm sure you face some if you use an OR/M (not necessarily NH) on a daily basis. With NHibernate an alternative could be setting FlushMode=Never on the session, but you still have all the OR/M plumbing in the entity-hydrating code that negatively impacts performance. I obtained impressive results in solving such a situation by using Dapper, a so-called single-file OR/M. It is a single file providing some IDbConnection extension methods; those methods work on an already opened connection, so we can use the connection attached to the open NHibernate session, as here below:

// don't get confused by the LinqToNH Query<>: this one is the Dapper query,
// acting on the CONNECTION :)
session.Connection.Query<MyDto>(
    "select Name=t.Name, Mail=t.Mail from mytable t where t.Valid=@Valid",
    new { Valid = true });
You get back a big result set of MyDto instances in almost the same time as if you had wired a DataReader over the DTO by hand, with all the error checking.
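For comparison, the hand-wired version of the same query would look something like this (a sketch using plain ADO.NET over the same session connection; MyDto and the column names are from the example above):

```csharp
// Hand-wired equivalent of the Dapper one-liner above (sketch):
var result = new List<MyDto>();
using (var cmd = session.Connection.CreateCommand())
{
    cmd.CommandText = "select Name=t.Name, Mail=t.Mail from mytable t where t.Valid=@Valid";
    var p = cmd.CreateParameter();
    p.ParameterName = "@Valid";
    p.Value = true;
    cmd.Parameters.Add(p);

    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            result.Add(new MyDto
            {
                // ordinal access, with null checking done by hand
                Name = reader.IsDBNull(0) ? null : reader.GetString(0),
                Mail = reader.IsDBNull(1) ? null : reader.GetString(1)
            });
        }
    }
}
```

Dapper generates materialization code roughly equivalent to this for you, which is why its speed is so close to the manual version.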

So why not use it always?

Because, despite the name, Dapper is not an OR/M: it does not keep track of modified entities, it does not help you paginate results or lazy-load the entity graph, nor does it help in porting from one SQL dialect to another.

Is this strategy used anywhere else?

You will probably find it interesting to read this post by Sam Saffron: this solution is used on stackoverflow.com, combined with the LinqToSql OR/M, to help when the OR/M performance is not enough.

In my tests I experienced a performance increase of 10x in a very hacky situation, but I can't show the case since it is not public code. Something more scientific about performance is here.

Thursday, 12 April 2012 09:41:12 (GMT Daylight Time, UTC+01:00)  #    Comments [0] - Trackback
CodeProject | Dapper | NHibernate | ORM

# Wednesday, 21 March 2012

Some days ago I ran into this issue regarding thread non-safety when using MoQ. So I simply created my own fork on GitHub and solved the issue, which was really easy to do; as a result I obtained a mock that is stable even when mocked methods are called from multiple threads. I created a bunch of tests to prove it works; here below is one as a sample:

[image: sample unit test exercising the mock from multiple threads]
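One of those tests looked roughly like this (a reconstruction with placeholder names, IMyService/DoWork; the real tests are in the fork):

```csharp
[TestMethod]
public void MockShouldSurviveMultithreadedCalls()
{
    var mock = new Mock<IMyService>();       // IMyService/DoWork are placeholder names
    mock.Setup(m => m.DoWork()).Returns(42);

    // Hammer the mocked method from many threads at once: with the fix,
    // the mock infrastructure no longer throws NullReferenceException.
    var tasks = Enumerable.Range(0, 100)
        .Select(_ => Task.Factory.StartNew(() => mock.Object.DoWork()))
        .ToArray();
    Task.WaitAll(tasks);

    Assert.IsTrue(tasks.All(t => t.Result == 42));
}
```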

So with this I made the mock thread safe at the infrastructure level, which means no more strange NullReferenceExceptions and the like. But what if we want our mock *not* to be thread safe? I mean, there could be situations in which we want to ensure the system under test calls a certain method from a single thread; in other words, we want the mock to explicitly require single-threaded access to certain methods. This can happen, for example, when we are mocking some UI components, but such situations arise every time the object we are mocking is intrinsically non thread safe and the SUT is multithreaded. So I extended the MoQ fluent language from the inside and obtained something like the example below:

[image: a setup using the SingleThread() fluent extension]

So in the setup phase we declare a method (or a setter, or a getter, as usual) to be SingleThread(). This yields a mock that throws when the method is called from a thread different from the one which did the setup.
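In code, such a setup looks roughly like this (the mocked interface is a placeholder; SingleThread() is the extension added in my fork):

```csharp
var mock = new Mock<IMyUiComponent>();   // placeholder interface
mock.Setup(m => m.Refresh())
    .SingleThread();                     // extension added in my fork

// Calling Refresh() from the setup thread is fine;
// calling it from any other thread makes the mock throw.
```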

If you are happy with these modifications (you will surely find the thread safety helpful on its own), feel free to check out my fork on GitHub; in any case I'm trying to have the modification pulled into the mainstream.

Wednesday, 21 March 2012 15:53:26 (GMT Standard Time, UTC+00:00)  #    Comments [0] - Trackback
C# | CodeProject | MoQ

# Thursday, 15 March 2012

I’ve already talked about the simple contract-checking library here. I made a little improvement that allows us to write code like this:

public void DoSomethingWithCustomMessage(string arg1)
{
    Contract.Expect(() => arg1)
        .Throw<Exception>("I'm unhappy to see that {0} is null")
        .WhenViolate
        .IsNotNull();
}

 

Note that {0} is replaced with the name of the argument, so the library is refactoring friendly and DRY. The only restriction is that the exception type must support a constructor with a single string argument. Obviously .Throw can be used fluently after each Expect(). As another minor improvement, even .Expect can now be called fluently more than once, so multiple checks can be chained fluently too. The entire repository is on Bitbucket here. In order to use the library you just have to include the single file, but it is better to point at the repository since the file can change in subsequent releases.
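Chaining .Expect more than once then looks something like this (a sketch using the method names described in these posts, including the library's own IsGreatherThan spelling; the real API is in the Bitbucket repository):

```csharp
// Checking two arguments in a single fluent chain (sketch):
public void Transfer(string account, int amount)
{
    Contract.Expect(() => account)
        .IsNotNull()
        .Expect(() => amount)   // .Expect can now be chained to start a new check
        .IsGreatherThan(0);
}
```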

Thursday, 15 March 2012 16:16:42 (GMT Standard Time, UTC+00:00)  #    Comments [0] - Trackback
C# | CodeProject

# Wednesday, 29 February 2012

In this post I will show how to make testable something that (at least for me) is usually left untested. I'm talking about the setup phase of a console app: argument checking, error reporting and so on. That logic is usually so simple that any good cowboy programmer would probably leave it outside any unit testing. Unfortunately we should at least do some manual checks to prove that logic works, and doing things manually is always silly. In this post we assume we have a working command-line parsing library, and a mocking framework. Let's see the cowboy code:

        static void Main(string[] args)
        {
            string optA,optB;
            optA = optB = null;
            bool done = false;
            OptionSet set = new OptionSet();
            set.Add("a=", (k) => optA = k);
            set.Add("b=", (k) => optB = k);
            set.Add("h", (k) => { LongHelp(); done = true; });
            set.Parse(args);
            if (done)
                return;
            if (string.IsNullOrEmpty(optA) || string.IsNullOrEmpty(optB))
            {
                ShortHelp();
                return;
            }
            DoTheJob(optA,optB);
            
        }

        private static void DoTheJob(string optA, string optB)
        {
            //something interesting here
        }

        private static void LongHelp()
        {
            Console.Error.WriteLine("Long help here...");
        }

        private static void ShortHelp()
        {
            Console.Error.WriteLine("Short help here");
        }
    }

So nothing special, the example is actually very simple: we have two mandatory parameters and a command-line switch to print a long help. If one argument is missing, a short help line must be presented. If all the parameters are provided, the DoTheJob() method should be called with the correct values.

The current code is not testable without hosting the console application as a process and looking at stdout to see what happens. Even with that strategy, we cannot precisely check what is passed to DoTheJob(). So we want to refactor the code without adding any complexity to the app. Here below is the proposed refactoring:

    public class Program
    {
        static void Main(string[] args)
        {
            new Program().Run(args);
        }
        public virtual void Run(string[] args)
        {
            string optA, optB;
            optA = optB = null;
            bool done = false;
            OptionSet set = new OptionSet();
            set.Add("a=", (k) => optA = k);
            set.Add("b=", (k) => optB = k);
            set.Add("h", (k) => { LongHelp(); done = true; });
            set.Parse(args);
            if (done)
                return;
            if (string.IsNullOrEmpty(optA) || string.IsNullOrEmpty(optB))
            {
                ShortHelp();
                return;
            }
            DoTheJob(optA, optB);

        }

        public virtual void DoTheJob(string optA, string optB)
        {
            //something interesting here
        }

        public virtual void LongHelp()
        {
            Console.Error.WriteLine("Long help here...");
        }

        public virtual void ShortHelp()
        {
            Console.Error.WriteLine("Short help here");
        }
    }

 

So, pretty easy: we provide a non-static method Run(), and all the internal functions are declared virtual. This is a five-minute modification we could probably apply to any other code like this. The difference is that now we can write some unit tests; let's see how:

        [TestMethod]
        public void ShouldDisplayShortHelp()
        {
            var moq = new Mock<Program>();
            moq.CallBase = true;
            moq.Setup(k => k.DoTheJob(It.IsAny<string>(), It.IsAny<string>()))
                .Throws(new InvalidProgramException("Should not call"));
            moq.Object.Run(new string[0]);
            moq.Verify(k => k.ShortHelp());
        }

        [TestMethod]
        public void ShouldDisplayLongHelp()
        {
            var moq = new Mock<Program>();
            moq.CallBase = true;
            moq.Setup(k => k.DoTheJob(It.IsAny<string>(), It.IsAny<string>()))
                .Throws(new InvalidProgramException("Should not call"));
            moq.Object.Run(new string[] { "-h" });
            moq.Verify(k => k.LongHelp());
        }

        [TestMethod]
        public void ShouldInvokeWithProperParameters()
        {
            var moq = new Mock<Program>();
            moq.CallBase = true;
            moq.Setup(k => k.DoTheJob("p1", "p2")).Verifiable();
            moq.Object.Run(new string[] { "-a=p1", "-b=p2" });
            moq.Verify();
        }

 

I used the MoQ library; please note CallBase set to true, because we are using the same object both to drive the test and to verify calls. In conclusion, we achieved a real unit test of something we often leave apart, we did it in memory, and even if the example is really trivial, the concept can be used in complex scenarios too. What about testing the inside of DoTheJob()? Well, if a good testing strategy is used, the internal part should be testable somewhere else; here we are proving we can test the shell.

Wednesday, 29 February 2012 21:47:36 (GMT Standard Time, UTC+00:00)  #    Comments [0] - Trackback
CodeProject | CSharp | Programming

# Tuesday, 28 February 2012

This is a post for myself, because I'm very much a cowboy programmer, and I know this is bad, even if the results sometimes look good. The lacking part in my code is usually unit tests, and there is no excuse for that, since it is necessary to change the approach at its root, for example by writing the (failing) test before the real code. This is actually a design strategy, one that eventually improves the reliability of the software and its portability towards other developers. The hardest part is that sometimes the boundary between unit testing and integration testing appears very thin: making that boundary better defined is part of the design process. I found this document, and I think it is very useful for understanding the common errors that produce non-testable code, and the strategies to avoid them.

Tuesday, 28 February 2012 21:13:39 (GMT Standard Time, UTC+00:00)  #    Comments [0] - Trackback
CodeProject | Programming

# Monday, 23 January 2012

… without writing a LinqToSomething provider, of course. The Expression<Func<T>> construct is sometimes a little frightening, since we assume we would have to write some complex tree navigation in order to achieve the expression behavior, but this is not always true: there are scenarios in which we can use it without any complex tree visit. In this post we will see some real-world examples using this strategy.

1) INotifyPropertyChanged without “magic strings”

This interface is implemented in its simplest form:

public string CustomerName
{
    get
    {
        return this.customerNameValue;
    }
    set
    {
        if (value != this.customerNameValue)
        {
            this.customerNameValue = value;
            NotifyPropertyChanged("CustomerName");
        }
    }
}

We can leverage Linq.Expression here by this simple base class:

class PropertyChangeBase : INotifyPropertyChanged
{
    protected void SignalChanged<T>(Expression<Func<T>> exp)
    {
        if (exp.Body.NodeType == ExpressionType.MemberAccess)
        {
            var name = (exp.Body as MemberExpression).Member.Name;
            PropertyChanged(this, new PropertyChangedEventArgs(name));
        }
        else
            throw new Exception("Unexpected expression");
    }

    #region INotifyPropertyChanged Members
    public event PropertyChangedEventHandler PropertyChanged = delegate { };
    #endregion
}

By deriving our class from this one, we can easily notify a property change by writing:

SignalChanged(()=>CustomerName);


This allows us to leverage IntelliSense, and it is refactoring friendly, so we can change the name of our property without pain. The first project I saw using this technique was Caliburn Micro, but I'm not sure it was the only one or the first. The same technique is used here to test the INotifyPropertyChanged behavior.
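Putting the two snippets together, the notifying property becomes (a straightforward combination of the base class and the property shown above):

```csharp
// A consumer of the PropertyChangeBase class shown above:
class Customer : PropertyChangeBase
{
    private string customerNameValue;

    public string CustomerName
    {
        get { return this.customerNameValue; }
        set
        {
            if (value != this.customerNameValue)
            {
                this.customerNameValue = value;
                SignalChanged(() => CustomerName); // no magic string
            }
        }
    }
}
```

Renaming CustomerName with a refactoring tool now updates the notification too, with no string to keep in sync.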

2) Argument Verification

Really similar to the problem above, we want to avoid:

static int DivideByTwo(int num)
{
    // If num is an odd number, throw an ArgumentException.
    if ((num & 1) == 1)
        throw new ArgumentException("Number must be even", "num");

    // num is even, return half of its value.
    return num / 2;
}


In this case we are typing "num", the name of the argument, as a literal string, which is bad. We would rather write something like this:

public void DoSomething(int arg1)
{
    Contract.Expect(() => arg1)
        .IsGreatherThan(0)
        .IsLessThan(100);
}

That again gives us IntelliSense and refactoring awareness. You can find the code for this helper class here, and a brief description in this post.

3) The MoQ mocking library

The MoQ library is a .NET library for creating mock objects, easy to use, that internally leverages Linq.Expression to achieve a readable syntax like this:

   mock.Setup(framework => framework.DownloadExists("2.0.0.0"))
       .Returns(true)
       .AtMostOnce();

4) A generic Swap function:

The simplest way to create a generic Swap function in C# is:

void Swap<T>(ref T a, ref T b)
{
   T temp = a;
   a = b;
   b = temp;
}

Unfortunately, this won't work if we want to swap two properties of an object, or two elements of an array. We would like to write something like this:

   var t = new Test_() { X = 0, Y = 1 };
   Swapper.Swap(() => t.X, () => t.Y);
   Assert.AreEqual(0, t.Y);
   Assert.AreEqual(1, t.X);

or with arrays:

    int[] array = new[] { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    Swapper.Swap(() => array[0], () => array[1]);
    Assert.AreEqual(2, array[0]);
    Assert.AreEqual(1, array[1]);

We can achieve this with a simple helper class using Linq.Expression:

public class Swapper
{
    public static void Swap<T>(Expression<Func<T>> left, Expression<Func<T>> right)
    {
        var lvalue = left.Compile()();
        var rvalue = right.Compile()();
        switch (left.Body.NodeType)
        {
            case ExpressionType.ArrayIndex:
                var binaryExp = left.Body as BinaryExpression;
                AssignTo(rvalue, binaryExp);
                break;

            case ExpressionType.Call:
                var methodCall = left.Body as MethodCallExpression;
                AssignTo(rvalue, methodCall);
                break;

            default:
                AssignTo(left, rvalue);
                break;
        }

        switch (right.Body.NodeType)
        {
            case ExpressionType.ArrayIndex:
                var binaryExp = right.Body as BinaryExpression;
                AssignTo(lvalue, binaryExp);
                break;

            case ExpressionType.Call:
                var methodCall = right.Body as MethodCallExpression;
                AssignTo(lvalue, methodCall);
                break;

            default:
                AssignTo(right, lvalue);
                break;
        }
    }

    private static void AssignTo<T>(T value, MethodCallExpression methodCall)
    {
        var setter = GetSetMethodInfo(methodCall.Method.DeclaringType, methodCall.Method.Name);
        Expression.Lambda<Action>(
            Expression.Call(methodCall.Object, setter, Join(methodCall.Arguments, Expression.Constant(value)))
        ).Compile()();
    }

    private static Expression[] Join(ReadOnlyCollection<Expression> args, Expression exp)
    {
        List<Expression> exps = new List<Expression>();
        exps.AddRange(args);
        exps.Add(exp);
        return exps.ToArray();
    }

    private static MethodInfo GetSetMethodInfo(Type target, string name)
    {
        var setName = Regex.Replace(name, "get", new MatchEvaluator((m) =>
        {
            return m.Value.StartsWith("g") ? "set" : "Set";
        })
        , RegexOptions.IgnoreCase);
        var setter = target.GetMethod(setName);
        if (null == setter)
        {
            throw new Exception("can't find an expected method named:" + setName);
        }
        return setter;
    }

    private static void AssignTo<T>(Expression<Func<T>> left, T value)
    {
        Expression.Lambda<Func<T>>(Expression.Assign(left.Body, Expression.Constant(value))).Compile()();
    }

    private static void AssignTo<T>(T value, BinaryExpression binaryExp)
    {
        Expression.Lambda<Func<T>>(Expression.Assign(Expression.ArrayAccess(binaryExp.Left, binaryExp.Right), Expression.Constant(value))).Compile()();
    }
}

This code builds on a sample by Takeshi Kiriya; I just added the ability to handle arrays to his original code.

5) Unit testing the presence of an attribute

Thomas Ardal talks in this post about how to easily unit test the presence of an attribute on a method of a class, useful for example in MVC scenarios, or in other AOP circumstances.

A test leveraging his strategy is written as below:

    var controller = new HomeController();
    controller.ShouldHave(x => x.Index(), typeof(AuthorizeAttribute));

So we have shown five different simple applications; I hope you find here some inspiration for your work, and feel free to write about your own ideas and enrich the list.

Monday, 23 January 2012 16:05:16 (GMT Standard Time, UTC+00:00)  #    Comments [0] - Trackback
C# | CodeProject | Linq

# Monday, 16 January 2012

Here below is a list of tools and libraries I consider necessary to carry on my USB key in order to be operative everywhere in very little time:

  1. SharpDevelop
  2. NHibernate
  3. Caliburn(Micro)
  4. NInject
  5. Kaxaml
  6. SQLite
  7. Rad Software Regular Expression Designer
  8. ILSpy
  9. FlyFetch
  10. log4net

SharpDevelop

It is probably the single open-source replacement for MS Visual Studio. Install it and start using it in a matter of minutes thanks to xcopy deployment. It reads projects in the same format as the original (since it uses the standard framework libraries for reading/writing projects).

NHibernate

If you can see a way to model the DB you want to use, then NH is probably the best OR/M existing in the .NET environment. As soon as you gain some confidence with it, it becomes very easy to model your objects, especially with the 3.2.x version, which no longer requires writing hbm files.

Caliburn Micro

If you write UI using some XAML dialect (WPF/Silverlight/WP7/the coming Win8) and you like MVVM, you have to look at it. Very easy to bootstrap, with coroutine support embedded; I would use it even for a Hello World application :)

NInject

An easy-to-learn DI framework. Easy and very intuitive to configure, it has features to inject multiple components as an array, and to configure dependencies from external modules. I chose it not just for the features, but also for the wonderful home page :)

Kaxaml

A pad to learn and test XAML, with IntelliSense and preview as you type. Like XamlPad, but much better.

SQlite

An embedded, file-based database. It handles concurrent access consistently and is easy to interface with NHibernate. Unfortunately it is a native solution, so it works only in fully trusted environments.

Rad Software Regular Expression Designer

There are a lot of regex testing tools, but this is the one I use, so…

ILSPy

The open-source replacement for Reflector; it comes from the same team that created SharpDevelop. It has all the features the standard Reflector has, but not yet a real plugin environment.

FlyFetch

It is the tool I use when I need to display a very long recordset in the UI and I want to page it without rewriting the same code every day.

log4net

To use in all applications, even the simplest: LogManager.GetLogger(GetType()).Info("Hello World"); :) It is probably the .NET logger that has existed since the early days, with a lot of appenders already written and tested.

 

So this is my list. Of course, another survival precondition is having internet access, and the StackOverflow help :) There is no NUnit nor a mocking library (such as MoQ), since both can be replaced by custom tests and mocks — but of course, only if there is still room on the USB key ;)

Monday, 16 January 2012 19:39:06 (GMT Standard Time, UTC+00:00)  #    Comments [0] - Trackback
CodeProject | Programming

# Wednesday, 11 January 2012

In the WP7 library there is an interesting utility class: CivicAddressResolver. This class should help us in doing so-called reverse geocoding: given a coordinate in terms of latitude and longitude, we want a readable address near that place. Unfortunately there is a bad surprise: as we read in the documentation, "this method is not implemented in the current release". So what if we need something like this while waiting for the fully fledged implementation? Since the class implements the interface ICivicAddressResolver, we can provide our own implementation, for example based on the Google Maps geocoding API. So I created a little project and a demo application. The main class implementing the resolver is GMapCivicAddressResolver.AddressResolver. You can use it in an application while awaiting the definitive implementation, with the limitation that this implementation returns something meaningful only in the field CivicAddress.AddressLine1. Another limit is that you can't call the blocking version of the resolve method; in any case this should not be a problem, since the asynchronous call is the one to prefer. Please check out the project here on Bitbucket. Here below a screenshot of the running app, showing a totally random address in Rome:
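Using the custom resolver follows the standard event-based ICivicAddressResolver pattern; a sketch (statusText is a hypothetical TextBlock, and the coordinates are just an example near Rome):

```csharp
// Swap in the custom resolver where the built-in one would be used:
ICivicAddressResolver resolver = new GMapCivicAddressResolver.AddressResolver();
resolver.ResolveAddressCompleted += (s, e) =>
{
    if (!e.Address.IsUnknown)
        statusText.Text = e.Address.AddressLine1; // only AddressLine1 is populated
};
// Always use the asynchronous call; the blocking ResolveAddress is not supported.
resolver.ResolveAddressAsync(new GeoCoordinate(41.89, 12.49));
```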

[screenshot: the demo app showing a resolved address in Rome]

Wednesday, 11 January 2012 21:00:53 (GMT Standard Time, UTC+00:00)  #    Comments [0] - Trackback
CodeProject | WindowsPhone

# Saturday, 17 December 2011

I would like to present here a little argument-verification library that does not require you to type any string to specify the name of the parameter you are checking. This makes the library faster to use, not intrusive in the actual method code, and refactoring friendly. As a bonus, you can use it by embedding just a single file. Below is an example, just to get immediately to the point:

As we can see, there are no magic strings at all. All the argument names are guessed thanks to the metadata contained in the Linq Expression we use. For example, the method at line 14, if called with a null value, will report:

Value cannot be null.
Parameter name: arg1

The same happens with the more complex check we do at line 46, when we write:

Contract.Expect(() => array).Meet(a => a.Length > 0 && a.First() == 0);

We have a complex predicate to meet, described by a lambda, stating that the input array should have a zero first element and non-zero length. Notice that the name of the parameter is array, but we need to use another name for the argument of the lambda (in this case I used 'a'); the library is smart enough to understand that 'a' actually refers to array, and the error message will report it correctly if the condition is not met. Just to clarify, the message in case of failure would be:

Precondition not verified:((array.First() == 0) AndAlso (ArrayLength(array) > 1))
Parameter name: array

Well, it is not supposed to be a message for a real end user; it is a programmer-friendly message, but such validation errors are supposed to be reported to a developer (an end user should not see method validation errors at all, should he?).

Well, Meet is a cutting-edge function we can use for complex validations. Out of the box, for simpler cases, we have some functions too, as we can see in the IContract interface definition:
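A plausible shape of that interface, inferred only from the calls shown in this and the previous posts (the authoritative definition is in the Bitbucket repository):

```csharp
// Sketch of the fluent contract surface, reconstructed from usage:
public interface IContract<T>
{
    IContract<T> IsNotNull();
    IContract<T> IsGreatherThan(T other);   // spelling as in the library
    IContract<T> IsLessThan(T other);
    IContract<T> Meet(Expression<Func<T, bool>> predicate);

    // Custom exception support: Throw<E>("...").WhenViolate.IsNotNull()
    IThrowClause<T> Throw<TException>(string messageFormat) where TException : Exception;

    // Fluent restart: Expect can be chained to check another argument.
    IContract<TOther> Expect<TOther>(Expression<Func<TOther>> exp);
}

public interface IThrowClause<T>
{
    IContract<T> WhenViolate { get; }
}
```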

An interesting portion of the proposed codebase is the one renaming the parameter in the lambda expression, so that the reported message reflects the correct offending parameter. It is not so easy, because plain string replacement would not work: we could have a parameter named 'a' appearing anywhere in the expression's string representation, and a plain replacement would result in a big mess; furthermore, Expressions are immutable. So I found help on StackOverflow, and a reply to this question solved the problem. Let's see the "Renamer" at work (thanks to Phil):
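The idea can be sketched with an ExpressionVisitor like the following (an assumed implementation in the spirit of the StackOverflow answer mentioned above; names are mine):

```csharp
using System;
using System.Linq.Expressions;

// Rebuilds a lambda so that its (single) parameter carries a new name.
public class ParameterRenamer : ExpressionVisitor
{
    private ParameterExpression replacement;

    public Expression<Func<TIn, TOut>> Rename<TIn, TOut>(
        Expression<Func<TIn, TOut>> exp, string newName)
    {
        replacement = Expression.Parameter(typeof(TIn), newName);
        // Visit the body, swapping every parameter occurrence, then
        // rebuild the lambda around the renamed parameter.
        return Expression.Lambda<Func<TIn, TOut>>(Visit(exp.Body), replacement);
    }

    protected override Expression VisitParameter(ParameterExpression node)
    {
        return replacement;
    }
}
```

For example, renaming `a => a > 0` with "array" yields an expression that prints as `array => (array > 0)`, which is what makes the error messages report the real parameter name.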

Basically, it is a reusable class that takes the new name of the parameter and returns a copy of the input expression with the (single) argument renamed.

To improve the library, or just use it, please follow/check out the project on Bitbucket; suggestions and comments are always welcome.

Saturday, 17 December 2011 13:24:25 (GMT Standard Time, UTC+00:00)  #    Comments [0] - Trackback
CodeProject | CSharp | Linq | Recipes

# Friday, 09 December 2011

It is easy to interface with the Google Data API by using the library Google supplies, for .NET too. Let's have an example of accessing the contacts information:
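A minimal sketch of such a class (the application name, credentials and the fields accessed are illustrative; consult the Google Data .NET client documentation for the authoritative API):

```csharp
using System;
using Google.GData.Client;
using Google.GData.Contacts;

public class ContactsDump
{
    public static void PrintContacts(string user, string password)
    {
        ContactsService service = new ContactsService("myCompany-myApp-1");
        service.setUserCredentials(user, password);

        // "default" means: the contacts of the authenticated user.
        ContactsQuery query = new ContactsQuery(ContactsQuery.CreateContactsUri("default"));
        ContactsFeed feed = service.Query(query);

        foreach (ContactEntry entry in feed.Entries)
            Console.WriteLine(entry.Title.Text);
    }
}
```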

 

This class requires the following references:

  • Google.GData.Client
  • Google.GData.Contacts
  • Google.GData.Extensions

All these are available in precompiled form after installing the Google Data API setup. Of course, the complete API contains methods to interact with a lot of good things in addition:

  • Blogger
  • Calendar
  • Calendar Resource
  • Code Search
  • Contacts
  • Content API for Shopping
  • Documents List
  • Email Audit
  • Email Settings
  • Google Analytics
  • Google Apps Provisioning
  • Google Health
  • Google Webmaster Tools
  • Notebook
  • Picasa Web Albums
  • Spreadsheets
  • YouTube

The only missing point: there is not (yet) a version for WP7, and the current codebase is not easy to port. Another missing point is that the API does not support OAuth2, which is indeed supported by the Google platform itself.

Friday, 09 December 2011 12:34:07 (GMT Standard Time, UTC+00:00)  #    Comments [0] - Trackback
CodeProject | CSharp | google-api

# Saturday, 03 December 2011

Since I had an amazing number of views on my previous article about rewriting my chess engine and publishing it as open source, I decided to extend the discussion a little more. Unfortunately this is not a brand new argument, since there are a lot of good articles on the web, but in my opinion some missing points exist: if you start reading the code of a fully fledged engine, even in C#, you will probably get lost in a big mesh of heuristics and optimizations without really getting what happens. By contrast, if you read the literature you will find a lot of pseudo code but nothing really working, and something that is a detail in the pseudo code can be really difficult to implement in real life just to see what happens. Here we will show how a plain algorithm from the literature behaves in its essence, solving a real chess problem. Of course this will not work in a real playable engine, but it has a big advantage: it is *understandable*, and it can be the starting point to optimize, so that by gradually reaching the fully fledged engine we eventually understand each single step better.

Which algorithm to use? Chess engines use some flavor of an algorithm called MiniMax, with an immediately (even for a simple case) necessary optimization called alpha-beta pruning. This is what we will show by example here below. So what exactly is MiniMax? It is an algorithm that works by producing a tree of the possible games, in which each node is a possible status and each arc producing the transition is the move (the decision) the player can make. At each node we can weigh the result for the player Mini and the player Max: Mini wins if that value is low, and Max wins when the value is high, so Mini wants to *minimize* a score function, and Max wants to maximize it. Since chess is a symmetric game, we can say that a good result for Mini is a bad result for Max and vice versa. This leads us to a single evaluation function, with the sign changed depending on the player. This simplification is referred to in the literature as Negamax. Let's see an example of a game tree, starting from a specific chess position (2rr3k/pp3pp1/1nnqbN1p/3pN3/2pP4/2P3Q1/PPB4P/R4RK1 w - - 0 0):

[diagram: the starting position]

The position is our root node, and a portion of the resulting tree is:

[diagram: a portion of the game tree generated from this position]

Well, it is just a portion: it is impossible to draw it all even for just a few plies, and it is even computationally impossible to enumerate all nodes beyond a few plies, because of the high branching factor chess has. The branching factor is a measure of how many nodes are generated from a root; in other words, in chess it is the average count of the possible moves a board has. For chess this number is about 35, so for each ply we have an exponentially increasing number of nodes, about 35^n, where n is the number of plies. Let's also consider why it is so important to have a correct move generator: just a single wrong move somewhere will mess up an enormous number of nodes.

Average number of nodes per ply in chess:

    ply 1:             35
    ply 2:          1,225
    ply 3:         42,875
    ply 4:      1,500,625
    ply 5:     52,521,875
    ply 6:  1,838,265,625

Of course this is just average data; it can be even worse in some situations. You can always know the exact count of nodes by using the perft test contained in the same project, but I suggest you start with a depth of 5/6 plies and see how long it takes before trying 8/9 ;)

So some optimization is necessary, since such an exponential explosion can't be managed by any kind of CPU. Probably the only game in which generating the whole tree is feasible is tic-tac-toe; for chess it is absolutely not the case. So we introduce alpha-beta pruning into our algorithm. But how can we prune some nodes rather than others? Let's take an example with the same position shown above: suppose White plays the knight to c6 (Nxc6); Black can capture it with the rook or with the pawn, Rxc6 and bxc6 respectively. In an alpha-beta pruning scenario, as soon as such a move refutes the white move, i.e. the move gives a gain better than the opponent's current best score, the search stops at that level. This is an enormous gain in terms of performance; the only drawback is that we get just a lower bound on the actual score of a position, so we don't really know if we could do better, but we rely on the fact that we can do well enough. How is this achieved in code? Let's see what we need:

  1. A way to score the position: material balance is more than enough for this sample.
  2. An algorithm that traverses the tree keeping track of the best score for the player ( alpha ) and for the opponent ( beta ).
  3. A way to sort the moves so that the “strongest” are seen first, the weak ones later.

Point 1 is easy: just give some value to each piece type and sum it, counting white pieces positive if white is the player to move, or vice versa. The algorithm we will see soon, but the tricky part is point 3. As you can probably guess, having good moves visited first increases the chances of stopping the search early ( the so-called beta cut-off ), with a dramatic performance gain. So the first real heuristic that gives your engine strength and personality is that function. In the example we will use a very basic ordering strategy that puts all promotions and good captures in front, all the “other” moves in the middle, and the bad captures at the end ( a good capture is one in which the capturing piece has a value less than or equal to the captured one ).
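To make points 1 and 3 concrete, here is a hedged sketch in C# of a material evaluator and of the ordering rule just described; the names ( SimpleEval, OrderKey ) and the board representation are illustrative, not the project's actual code:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch: piece values in centipawns plus the basic ordering rule.
static class SimpleEval
{
    static readonly Dictionary<char, int> PieceValue = new Dictionary<char, int>
    {
        ['P'] = 100, ['N'] = 320, ['B'] = 330, ['R'] = 500, ['Q'] = 900, ['K'] = 20000
    };

    // Material balance, reported from the side to move's point of view.
    public static int Evaluate(IEnumerable<(char piece, bool isWhite)> board, bool whiteToMove)
    {
        int score = 0;
        foreach (var (piece, isWhite) in board)
            score += (isWhite ? 1 : -1) * PieceValue[piece];
        return whiteToMove ? score : -score;
    }

    // Lower key = searched earlier: promotions and good captures first (0),
    // quiet moves in the middle (1), bad captures at the end (2).
    public static int OrderKey(char mover, char? captured, bool isPromotion)
    {
        if (isPromotion) return 0;
        if (captured.HasValue)
            return PieceValue[captured.Value] >= PieceValue[mover] ? 0 : 2;
        return 1;
    }
}
```

Sorting the generated moves by OrderKey is enough to get the promotions/good-captures-first order described above.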

So let's show the “vanilla” algorithm. Why “vanilla”? Because a real chess engine extends these concepts a lot, and adds plenty of other functionality to make the engine responsive, but the one shown does the job and is ( hopefully ) as clear as pseudo code, with the difference that it is working code you can inspect, debug and use for learning:
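A minimal sketch of what the vanilla negamax alpha-beta looks like; as in the real code the non-algorithmic parts ( move generation, make/unmake, evaluation ) are injected as delegates, but the class and member names here are illustrative stand-ins, not the project's API:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Generic negamax alpha-beta over delegates, so the algorithm reads like pseudo code.
class VanillaSearch<TMove>
{
    public Func<IEnumerable<TMove>> GenerateOrderedMoves; // strongest moves first
    public Action<TMove> Make;
    public Action<TMove> Unmake;
    public Func<int> Evaluate; // static score from the side to move's point of view

    public int Search(int depth, int alpha, int beta)
    {
        var moves = GenerateOrderedMoves().ToList();
        if (depth == 0 || moves.Count == 0)
            return Evaluate();
        foreach (var move in moves)
        {
            Make(move);
            int score = -Search(depth - 1, -beta, -alpha); // negamax: flip sign and window
            Unmake(move);
            if (score >= beta) return beta;   // beta cut-off: the opponent refutes this line
            if (score > alpha) alpha = score; // new lower bound for the side to move
        }
        return alpha;
    }
}
```

The `score >= beta` line is exactly the refutation described above: once a move is good enough to refute the previous one, the remaining siblings are never visited.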

The interesting portion is the Search function. I used delegates to extract the non-algorithm-related code, so it looks as simple as pseudo code, but it works. Then I wrote a test case using this search function:

[TestMethod]
public void TestQg6()
{
    using (var rc = new RunClock())
    {
        var engine = new SynchronEngineAdapter(new SimpleVanillaEngine(7),
            "2rr3k/pp3pp1/1nnqbN1p/3pN3/2pP4/2P3Q1/PPB4P/R4RK1 w - - 1 1");
        Assert.AreEqual("g3g6", engine.Search());
        Console.WriteLine("Elapsed milliseconds: " + rc.GetElapsedMilliseconds());
    }
}

The search code is called through the class SimpleVanillaEngine; this is just a wrapper that injects the proper move generation calls and the evaluation/ordering functions. That test runs in about 40 seconds on my laptop, which is unacceptable for a real engine, but satisfying because, even though the code is simple, it reports the correct answer. Why can I say so? Because the board I proposed is a sort of standard test for chess engines. Please note that the correct move Qg6 is reported in the test as g3g6, since our engine does not yet support the human algebraic notation, but as you can guess the move is equivalent. This case is important because it shows how an apparently wrong move can lead to a win if we look deep enough.

Well, if interest in the project continues as it started, I will blog again on how to turn this into a real engine.

Saturday, 03 December 2011 13:03:44 (GMT Standard Time, UTC+00:00)  #    Comments [0] - Trackback
Chess | CodeProject | CSharp | Games

# Sunday, 27 November 2011

In this post Ayende talks about when we should use NHibernate, and he points out that in almost-read-only scenarios other approaches may be preferable. I think he forgets to mention that even in such a scenario we can leverage the very reliable multi-DB abstraction offered by NH, which can help us if we plan to target different data platforms. In my opinion, the decision point for choosing NH over another approach is the ability to create an entity model, and an entity model that is helpful to our objectives. This also depends on how comfortable we are with the technology. Another interesting extension of the argument is: if we should not use NH, what can we use instead? Certainly not EF, since the reasons for renouncing NH in a project apply equally to EF. The NoSQL solutions work only if we can completely avoid a relational database, and pure crude ADO.NET is just ugly. An option could be Dapper, a lightweight OR/M ( not exactly an OR/M, but almost ) that removes all the ugliness of ADO.NET and does not change the performance compared to the manual data access approach. I have not tried it myself, but one of its users is Stack Overflow, and this should be a guarantee by itself.

Sunday, 27 November 2011 08:56:37 (GMT Standard Time, UTC+00:00)  #    Comments [0] - Trackback
CodeProject | NHibernate | ORM

# Saturday, 26 November 2011

I decided to publish my chess engine work on bitbucket. It is almost a redo from scratch of a complete but buggy chess engine I wrote in the past, which I decided to rewrite simply because it was difficult to fit what I had learned into the old code. The version present as I write this post contains just the move generator and the complete tests for it ( I used another nice engine, roce, to compare my perft test results against ). What is a perft test? It is a test proving that our engine produces, from a starting board, all the possible different boards in a certain number of plies, according to the chess game rules. This test also gives an idea of how fast our move generation strategy is; even if this affects only part of the overall performance of the alpha/beta pruning, we should not write a slow, blobby monster. Let's see below a session of the test working:

[Screenshot: a perft test session running]

The strange strings showing the positions are board situations expressed in FEN notation, which is more or less the standard notation used to describe board positions. How many tests does the FelpoII move generator pass? Here is the file containing the FEN boards, with the per-depth move counts shown; there are a lot of positions, including tricky and generally challenging ones ( in terms of rules ).
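The perft recursion itself is tiny; here is a hedged sketch against a hypothetical move-provider interface ( not FelpoII's actual API ), plus a toy provider with a constant branching factor to show the exponential growth discussed earlier:

```csharp
using System.Collections.Generic;

// Hypothetical interface standing in for the engine's real move generator.
interface IMoveProvider<TMove>
{
    IEnumerable<TMove> GenerateMoves();
    void Make(TMove move);
    void Unmake(TMove move);
}

static class PerftRunner
{
    // Count every position reachable in `depth` plies from the current board.
    public static long Perft<TMove>(IMoveProvider<TMove> board, int depth)
    {
        if (depth == 0) return 1; // a leaf counts as one reached position
        long nodes = 0;
        foreach (var move in board.GenerateMoves())
        {
            board.Make(move);
            nodes += Perft(board, depth - 1);
            board.Unmake(move);
        }
        return nodes;
    }
}

// Toy provider with constant branching factor 3: Perft grows as 3^depth.
class Fanout : IMoveProvider<int>
{
    public IEnumerable<int> GenerateMoves() { return new[] { 0, 1, 2 }; }
    public void Make(int move) { }
    public void Unmake(int move) { }
}
```

For the standard chess starting position the well-known reference counts are 20, 400, 8902, ... nodes at depths 1, 2, 3, so a single wrong move in the generator shows up immediately as a count mismatch.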

What about performance? Since almost all chess engines are written in C++ or C, can a C# engine work within the same order of magnitude of performance? Here below is the result for the starting position:

Depth: 6 119060324 moves 5,34 seconds. 22283422,048 Move/s. Hash Hit=1400809

The same test with roce ( which is a C++ chess engine ):

Perft (6): 119060324, Time: 4.208 s

So almost the same, which is good if we remember that we wrote it in C# :) We just used a little hack: as you probably know ( or will discover if you start playing with chess engine development ), chess engines use hash tables to store information about a board ( indexed by hashes built from Zobrist keys ). This table is stored in a big unmanaged memory array, which achieved a really noticeable increase in performance.
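The Zobrist scheme mentioned above can be sketched like this; the 64x12 key layout and the use of System.Random with a fixed seed are illustrative stand-ins, since the engine actually seeds its keys from a Mersenne Twister:

```csharp
using System;
using System.Collections.Generic;

static class Zobrist
{
    // One random 64-bit key per (square, piece kind) pair.
    public static readonly ulong[,] Keys = InitKeys();

    static ulong[,] InitKeys()
    {
        var rnd = new Random(12345); // placeholder PRNG, fixed seed for reproducibility
        var keys = new ulong[64, 12];
        var buffer = new byte[8];
        for (int sq = 0; sq < 64; sq++)
            for (int piece = 0; piece < 12; piece++)
            {
                rnd.NextBytes(buffer);
                keys[sq, piece] = BitConverter.ToUInt64(buffer, 0);
            }
        return keys;
    }

    // Full hash: XOR of the key of every (square, piece) pair on the board.
    public static ulong Hash(IEnumerable<(int square, int piece)> occupied)
    {
        ulong h = 0;
        foreach (var (sq, p) in occupied)
            h ^= Keys[sq, p];
        return h;
    }

    // Incremental update: moving a piece just XORs it out of the old square
    // and into the new one, so the hash is almost free to maintain.
    public static ulong MovePiece(ulong hash, int piece, int from, int to)
        => hash ^ Keys[from, piece] ^ Keys[to, piece];
}
```

The XOR structure is also why weak random keys cause nasty collision bugs: distinct boards can end up sharing a hash, which is exactly the problem the Mersenne Twister solved here.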

Well some more details about the engine:

  • It uses a 0x88 board representation.
  • It uses object oriented code ( so it is easier to understand compared to traditional C++ engines ).
  • The internal random numbers for the Zobrist keys are generated by a Mersenne Twister generator, which really solved some nasty bugs caused by hash conflicts when I used the standard C# random algorithm.
  • It uses a transposition table in unmanaged memory to increase performance.
  • It has performance comparable ( at least in move generation ) with traditional C/C++ engines.

What can we do next?

Complete the engine with a good, working negamax alpha-beta pruning algorithm.

What can we do with the code as-is?

We can use the move generator to validate game moves in a two-human-player UI, generate fancy images from FEN positions, write ( yet another ) WPF chess board ( winboard compatible, so a lot of existing engines could play on it ), and so on.

Enjoy.

Saturday, 26 November 2011 10:11:13 (GMT Standard Time, UTC+00:00)  #    Comments [3] - Trackback
CodeProject | CSharp | Games

# Thursday, 17 November 2011

Even if the Linq to NHibernate provider allows us to write queries in a strongly typed manner, it is sometimes necessary to work with property names literally. For example, in a RIA application a service can receive a string parameter containing the name of the property to order by. Since Linq to NHibernate is a standard Linq provider, we can leverage a standard dynamic Linq parser. This is achieved by using some old code by Microsoft, known as System.Linq.Dynamic. By following the link you will find a download location pointing to little more than a sample project, which however contains the file Dynamic.cs with the extension methods allowing us to merge literal parts into a type-safe Linq query.

Let's see an example:

var elist = session.Query<MyEntity>()
              .OrderBy("Name descending")
              .Skip(first)
              .Take(count)
              .ToList();

I assumed we have a property called Name on the class MyEntity. The OrderBy taking a string as a parameter is an extension method provided by Dynamic.cs; in order to have it working you just need to merge the file Dynamic.cs into your project and import System.Linq.Dynamic. Of course you also get extensions for Where and for other Linq operators.

Thursday, 17 November 2011 13:15:19 (GMT Standard Time, UTC+00:00)  #    Comments [5] - Trackback
CodeProject | NHibernate

# Thursday, 03 November 2011

As we know, the Caliburn Micro library implements a screen conductor to handle multiple screen models with only one active, typically used for tabbed views; this is easy to implement by deriving your model from Conductor<IScreen>.Collection.OneActive. It works out of the box with the standard tab control, but it is not possible to use it, for example, with the tabbed documents in AvalonDock. The only solution I found is this one, but I don't like it because it forces us to write code inside the view, which is not acceptable in a pure MVVM solution; so I preferred to insulate the code in an attached behavior. In addition, the presented solution works correctly with the Activate/Deactivate/CanClose strategy on each document. We just need to modify the view markup as in the example below:

As you can see, we just added an attached property UseConductor.DocumentConductor that we bind to the current model. Of course the model is a OneActive screen conductor. The behavior takes care of connecting the document items of the DocumentPane with the screen conductor items. If each screen implements IScreen, the proper Activate/Deactivate/CanClose methods are called, so we can even handle canceling the close of a dirty document. Here is the attached behavior code, and an example MainModel can be the following one:

( we just add some random documents to see how it behaves )

And here below is an example of a single screen model:

So we have the conductor, without touching the view code and without creating a custom screen conductor.

Thursday, 03 November 2011 19:52:59 (GMT Standard Time, UTC+00:00)  #    Comments [6] - Trackback
Caliburn | WPF | CodeProject

# Tuesday, 11 October 2011

As I said in this post, I have been working on a helper library to encapsulate the extra work we should do when presenting big datasets to the user: limiting the resultset. I wrote a 5-minute demo to show how reactive the interface can appear, based on NHibernate, Caliburn and FlyFetch. This does not mean that FlyFetch depends in some way on these libraries; it is just to have something to show very quickly. The present FlyFetch version is not bound to any presentation technology ( even if RIA and WPF applications are probably the best candidates for using it ). The application requires AdventureWorks, so since the prerequisites are not trivial I decided to grab a little video of the sample app running, just to show the reactivity we can achieve with FlyFetch.

FlyFetch in a demo app

I managed to remove any proxy engine dependency ( I use an internal proxy factory to create a special custom proxy ), and I managed to have an MVVM-friendly component, since the pager works independently from the view. You are welcome to follow FlyFetch at this address, or simply fork your version here. As an update, I have used FlyFetch with success in production too :)

Tuesday, 11 October 2011 20:54:13 (GMT Daylight Time, UTC+01:00)  #    Comments [0] - Trackback
CodeProject | FlyFetch | MVVM | WPF

# Thursday, 06 October 2011

One of the most important features for making a data-based application reactive is limiting the resultset returned to the GUI. This is usually done in two places: the service implementation, which must support some ranking strategy, and the GUI itself, which has to leverage these features to show the user just the portion he needs ( or sees ). At work I implemented an almost generic solution for WPF that leverages Linfu.DynamicProxy. The strategy consists of filling the (observable) collection with fake objects that, by interception, drive the underlying data source to fetch the items in pages just when the user actually sees them. I like that solution, but it introduces an extra dependency that I prefer to avoid, and indeed the proxy features I need are so simple that an entire fully-fledged library is too much. So I started a project that aims to be:

  • Zero dependency
  • Single file ( can add to your project without referencing any dll )
  • Never blocking the GUI
  • Easy to use
  • Data source independent
  • MVVM friendly
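The fetch-on-touch strategy described above can be sketched without any proxy library at all; PagedList and fetchPage are hypothetical names for illustration, not FlyFetch's actual API: the collection is pre-filled with placeholders, and touching a placeholder triggers a single page fetch.

```csharp
using System;
using System.Collections.Generic;

// Illustrative flyweight-fetching list: items are fetched page by page,
// lazily, the first time an index inside that page is actually read.
class PagedList<T> where T : class
{
    private readonly T[] items;
    private readonly int pageSize;
    private readonly Func<int, int, IReadOnlyList<T>> fetchPage; // (first, count) -> rows

    public PagedList(int totalCount, int pageSize, Func<int, int, IReadOnlyList<T>> fetchPage)
    {
        items = new T[totalCount];
        this.pageSize = pageSize;
        this.fetchPage = fetchPage;
    }

    public T this[int index]
    {
        get
        {
            if (items[index] == null) // placeholder hit: fetch the whole page
            {
                int first = (index / pageSize) * pageSize;
                var page = fetchPage(first, pageSize);
                for (int i = 0; i < page.Count && first + i < items.Length; i++)
                    items[first + i] = page[i];
            }
            return items[index];
        }
    }
}
```

A real implementation would fetch asynchronously so the GUI never blocks, which is exactly the point of the bullet list above; this sketch only shows the paging logic.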

Here is the first draft if you want to see it, and contribute with ideas and suggestions. The name stands for Flyweight-pattern Fetching strategy. The current solution is under development; don't download it if you need a running solution. I usually post only when things are in some way usable, so this is an exercise for me too: presenting a work at its very early stages, in the hope of getting contributions to create something *better* :)

Update: here is some progress

Thursday, 06 October 2011 21:12:40 (GMT Daylight Time, UTC+01:00)  #    Comments [0] - Trackback
CodeProject | MVVM | WPF

About the author/Disclaimer

Disclaimer
The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.

© Copyright 2017
Felice Pollano