Why Automapping is bad for you

Just recently I was almost convinced that Automapping is magic and we should use it. For those who don’t know what automapping is: it’s a way to automatically convert one model class to another “without writing code”. For instance, you have an internal business model which you don’t want to expose via a public REST API, so you create a similar, simplified class which you then serialise/deserialise to JSON or another format, and which is designed to be simple and dumb and usable by dumber frameworks like JavaScript and similar. Another example is mapping classes generated by a database ORM tool (like Entity Framework) to your business model.

There are many frameworks in the .NET world which let you achieve this; the most popular and most developed one is AutoMapper.

A few other alternatives include:

  1. EmitMapper
  2. ValueInjecter (I guess it should be called ValueInjector but who cares)
  3. OoMapper
  4. and many others

I love AutoMapper and even use it occasionally, which is why the code examples in this post use it. So what’s so horrible about automapping?

It’s a horrible bag of bugs just waiting to pop up in production

Most of the automappers operate on the principle of automatic mapping; obviously, if they didn’t, why would you use a framework at all? My main argument against this: precisely because it’s automatic and saves you a lot of time initially, you will get bigger problems eventually. Automatic mapping works by trying to find which properties on either side of the mapping are similar. These two classes, for instance, are a perfect example of when it works well:

public class BusinessEntity
{
  public string FieldOne { get; set; }
  public string FieldTwo { get; set; }
}
 
public class WebApiEntity
{
  public string FieldOne { get; set; }
  public string FieldTwo { get; set; }
}

You have these two classes, the properties are the same (or similar; a good framework will smooth over minor inconsistencies in naming), and as soon as a good programmer sees this, an obvious question pops into his head: why the hell should I write this by hand when I can use reflection (or more advanced and faster techniques like Emit)?
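
Under the hood there’s no magic. A minimal sketch of the idea (MapByName is my own illustration, not any framework’s actual code): match identically named public properties and copy values via reflection; real frameworks then compile this into fast delegates with Emit or expression trees:

public static TDest MapByName<TSource, TDest>(TSource source) where TDest : new()
{
  var dest = new TDest();
  foreach (var destProp in typeof(TDest).GetProperties())
  {
    // Copy every writable destination property that has a
    // type-compatible counterpart of the same name on the source.
    var sourceProp = typeof(TSource).GetProperty(destProp.Name);
    if (sourceProp != null && destProp.CanWrite
        && destProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType))
    {
      destProp.SetValue(dest, sourceProp.GetValue(source));
    }
  }
  return dest;
}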

So you define a mapping somewhere in the solution (AutoMapper example):

Mapper.CreateMap<BusinessEntity, WebApiEntity>();
Mapper.CreateMap<WebApiEntity, BusinessEntity>();
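
Once the maps are registered, the actual conversion is a one-liner anywhere in the codebase (this is the classic static AutoMapper API used above; newer AutoMapper versions do the same through a MapperConfiguration instance):

var webApiEntity = Mapper.Map<WebApiEntity>(businessEntity);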

Everyone is happy and things just work. Conversion is bleeding fast at runtime because the frameworks (well, most of them) do their job exceptionally well. Nothing to worry about, right?

That’s where you got it wrong.

Simple refactoring

Suppose the solution grows in size and you simply forget how many model classes you have and which properties are for what. It doesn’t even have to grow: just go on holiday for a week and see what happens after you come back. Someone decides that BusinessEntity.FieldOne should now be called BusinessEntity.CoolField. The mapping swallows this without complaint, but your public API no longer returns FieldOne, because nobody remembers how it maps in the large codebase. The code compiles and the API still “works”!
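
A quick sketch of the failure mode after such a rename (same classes and maps as above):

// BusinessEntity.FieldOne was renamed to CoolField, yet this still
// compiles and runs without a single error or warning:
var dto = Mapper.Map<WebApiEntity>(new BusinessEntity
{
  CoolField = "hello",
  FieldTwo = "world"
});
// dto.FieldOne == null - no compiler error, no runtime exception,
// just a silently missing field in the public API response.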

Integration testing

I think by this point many of you will say I’m wrong and that this will be caught by integration tests. In reality no project ever has 100% coverage. On top of that, it’s just stupid, expensive, boring and time-consuming to write tests which verify that a property is returned. Remember the good advice on unit testing not to test get/set properties? In essence you will be testing a mapping framework’s getters and setters!
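
To illustrate, this is roughly what such a test looks like (NUnit syntax, as one possible example); the assertion checks nothing but a property copy:

[Test]
public void WebApiEntity_maps_FieldOne()
{
  var source = new BusinessEntity { FieldOne = "value", FieldTwo = "x" };
  var result = Mapper.Map<WebApiEntity>(source);
  Assert.AreEqual("value", result.FieldOne);
}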

Performance

That’s right, a proper mapping framework is really fast, and raw mapping performance is not an issue. However, even the best one still needs to discover mappable classes on application start (or on first use), which adds noticeably to application start-up time. Sometimes that’s not a problem, but often it is, and you’ll be stuck in a situation where you can’t get rid of this bottleneck.

Poor code quality

Every framework has its limitations; here are just some of them:

  • Mappable properties must be public, or at least settable and gettable. Quite often you don’t want that. Not every entity is a plain old data structure, especially in a rich domain model: a well-designed model doesn’t let you put an object into an invalid state, and most mapping frameworks will require you to sacrifice exactly these business rules (see the sketch after this list).
  • Mappable classes must have a public constructor. Again, a well-designed model usually doesn’t have one, at least not in most of the business-logic classes: you build the business logic safe and solid, using best practices such as overloaded constructors and factory methods.
  • A growing number of workarounds to keep classes mappable. Yes, you have to compromise on naming, create fake factory properties and so on. Remember the horrible workarounds needed to make classes XML-serialisable, with [XmlIgnore] and fake getters/setters?
  • A dead end in refactoring. You will get to the point where you can’t refactor because you don’t know who maps your business model and how, especially when the mapping code lives outside of your control.
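
To make these constraints concrete, here is a hypothetical rich domain class (Invoice is my own illustration, not from any real codebase) that a typical automapper chokes on without workarounds:

// No public parameterless constructor, no public setters, invariants
// enforced at creation time - an automapper needs a workaround for
// every one of these.
public class Invoice
{
  public string Number { get; private set; }
  public decimal Total { get; private set; }

  private Invoice() { }

  public static Invoice Create(string number, decimal total)
  {
    if (string.IsNullOrEmpty(number))
      throw new ArgumentException("Invoice number is required.", "number");
    if (total < 0)
      throw new ArgumentOutOfRangeException("total");
    return new Invoice { Number = number, Total = total };
  }
}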

What’s the alternative?

The alternative, and the best solution, is already built into the C# language. It’s just called methods. Yeah, you heard me. Using the previous code samples and no mapping frameworks, it looks like this:

public class BusinessEntity
{
  public string FieldOne { get; set; }
  public string FieldTwo { get; set; }
}
public class WebApiEntity
{
  public string FieldOne { get; set; }
  public string FieldTwo { get; set; }
 
  public static WebApiEntity FromBusinessEntity(BusinessEntity entity)
  {
    return new WebApiEntity {FieldOne = entity.FieldOne, FieldTwo = entity.FieldTwo};
  }
 
  public BusinessEntity ToBusinessEntity()
  {
    return new BusinessEntity {FieldOne = FieldOne, FieldTwo = FieldTwo };
  }
}

The conversion methods are coded in the outer layer, the one that consumes the business model.
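
In practice the call site is trivial. A hypothetical Web API controller action (the repository field is assumed) would read:

public WebApiEntity Get(int id)
{
  // Load the business entity and convert it explicitly - from here on
  // the compiler guards every property in the chain.
  BusinessEntity entity = _repository.Load(id);
  return WebApiEntity.FromBusinessEntity(entity);
}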

Why is this better?

Simple refactoring

Refactoring is a dream. When I change a property name on either side, I see a compilation error immediately and fix it on the spot. And when both projects are in the same solution, the IDE will rename it everywhere for me!

Integration testing

Well, I just don’t need it. The compiler has already checked it for me, right? Now I can use the QA resource for something interesting which actually delivers business value, instead of checking for developer typos.

Performance

It’s the maximum performance you can possibly achieve. There is no initialization overhead, no reflection involved; the code just does its job. Oh, and you can apply best coding practices here too, for instance caching the converted class based on some business rule.
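
As a minimal sketch of such caching (my own illustration; it assumes the same BusinessEntity instance may be converted repeatedly, and keys the cache by source reference so entries are collected together with their source objects):

using System.Runtime.CompilerServices;

public static class WebApiEntityCache
{
  private static readonly ConditionalWeakTable<BusinessEntity, WebApiEntity> Cache =
    new ConditionalWeakTable<BusinessEntity, WebApiEntity>();

  public static WebApiEntity FromBusinessEntityCached(BusinessEntity entity)
  {
    // Convert on first request; return the cached DTO afterwards.
    return Cache.GetValue(entity, WebApiEntity.FromBusinessEntity);
  }
}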

Highest code quality

With this approach none of your models need to adapt. You are using the code you would normally write in business logic anyway; constructors and factory methods will validate that you are creating the classes correctly.

Higher developer productivity

Some will argue, but it is. In order to map two classes with an automapping framework you need to:

  • Make sure developers know how to use the framework.
  • Inspect two classes to make sure they are mappable.
  • Write mapping definitions, even if those are two lines of code.
  • Write mapping code in the actual method calls, which are longer and more error prone than just calling a conversion method on the class.
  • Run the project at least once to make sure it doesn’t fail at runtime.
  • Write integration tests covering a few permutations.
  • Make sure that developers who use the classes know how to map them too.

With manual mapping you need to:

  • Write one or two conversion methods (depending on whether you need to convert back as well). Most conversions still fit on one line each.
  • Write a considerably smaller number of integration tests, because you just know the mapping works.

I still think that Automapping is awesome

However, I would only use it for quick prototyping. If you are planning to put the code into production, throw that mapping away!

