Martin Fowler once wrote an article “AnemicDomainModel” decrying the “anti-pattern” of a service layer that executes actions (as opposed to delegating them to an entity), combined with entities that are just data buckets of getters and setters.
His claim is that this kind of application architecture is wrong because “…it’s so contrary to the basic idea of object-oriented design; which is to combine data and process together.”
Object-oriented design, i.e. designing around conceptual "object" blocks that model distinct entities in your system, has always been a bit of a flawed model, in my opinion. When you go shopping, let's say to buy a DVD, you grab a basket and wander down to the DVD aisle. You check for the next boxed set of your favourite TV series, grab a copy, put it in your basket, wander across to the till (yeah yeah, no-one gets a basket for one item, just roll with it for now) and pay.
When you "added" your item to the basket, the product did nothing, and neither did the basket; the entity that performed the action was your hand. That's because products and baskets can't do anything: a product is just an inanimate thing (obviously there are exceptions, such as if you're buying a puppy, but there are exceptions to almost everything), and a basket is just a receptacle for inanimate (or animate) things. Still think your product needs an AddToBasket(Basket basket) method?
So, if we're building an eCommerce system (a full-on ecosystem, not just the B2C frontend; warehouse management, invoicing etc.), which of these systems will have product data? I'm going to say all of them.
You’ve got options at this point. You can say “No! I’m not going to pollute my applications with shared logic and shared datatypes and shared… well, anything”. I admit, there are some serious downsides to having shared functionality, the main one being the inability to make changes without risk of regression. But risk can be managed, and the organisational upsides of the development velocity that can come from only writing code once, as opposed to once per system, are massive.
Common functionality is unlikely to live on your Product class. Warehouses don't have baskets; customers don't buy quarterly overviews; your accountant probably doesn't care too much about how much stuff fits onto a pallet. But common data is another matter: everyone will need to know what the product is (the warehouse needs to know that Product #45126 is a crate of beans, not the wrongly-marked ostrich steak they can't fit on a pallet; the accountant needs to know that a crate of tennis balls can't possibly be priced at $4,000 a pop). Everyone will need pretty much the same level of basic information.
How does this fit into a domain model?
In the ADM (I would like to co-opt this as the moniker for this pattern; I don't even slightly think it's insulting), our "domain model" is only "anaemic" at the data layer. We have a full-blown domain model at the "service" layer. There's just an extra conceptual leap: from a polymorphic product to a polymorphic service.
Under "normal" circumstances (old-skool circumstances, more like) you'll probably have a factory that takes the "productType" attribute from the raw dataset and decides which type of product to instantiate, then whichever class is handling the basket addition delegates the AddToBasket() call to the newly-instantiated SubProduct.
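A minimal sketch of that old-skool shape, in Java for concreteness. The class names (ProductFactory, PhysicalProduct, DigitalProduct) and the "physical"/"digital" type strings are illustrative assumptions, not from the original; the point is just that behaviour lives on the Product subtypes:

```java
import java.util.ArrayList;
import java.util.List;

// Rich-domain-model sketch: behaviour lives on the Product subclasses.
abstract class Product {
    final String productId;
    Product(String productId) { this.productId = productId; }
    // Each subtype decides for itself how it goes into a basket.
    abstract void addToBasket(Basket basket);
}

class PhysicalProduct extends Product {
    PhysicalProduct(String id) { super(id); }
    @Override void addToBasket(Basket basket) {
        // e.g. stock-level checks would go here, then...
        basket.items.add(this);
    }
}

class DigitalProduct extends Product {
    DigitalProduct(String id) { super(id); }
    @Override void addToBasket(Basket basket) {
        // e.g. no stock check needed for a download, then...
        basket.items.add(this);
    }
}

class Basket {
    final List<Product> items = new ArrayList<>();
}

// The factory maps the raw "productType" attribute to a concrete subtype.
class ProductFactory {
    static Product fromRow(String productType, String productId) {
        switch (productType) {
            case "physical": return new PhysicalProduct(productId);
            case "digital":  return new DigitalProduct(productId);
            default: throw new IllegalArgumentException(productType);
        }
    }
}
```

Note that the Product here both carries the data and performs the action, which is exactly the coupling the rest of this post argues against.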
With the new-school (super-awesome) method, we'll probably have a factory that takes the "productType" attribute from the Product class and decides which type of IBasketHandler to instantiate, then whichever class is handling the basket addition delegates the AddToBasket() call to the newly-instantiated SubProductBasketHandler.
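The same sketch rearranged into the handler-based version, again in Java. IBasketHandler keeps the C#-style name from the text; the concrete handler classes and type strings are illustrative assumptions. Product is now pure data, and all the behaviour sits behind the interface:

```java
import java.util.ArrayList;
import java.util.List;

// Anemic-domain-model sketch: Product is a plain data bucket.
class Product {
    final String productId;
    final String productType;
    Product(String productId, String productType) {
        this.productId = productId;
        this.productType = productType;
    }
}

class Basket {
    final List<Product> items = new ArrayList<>();
}

// Behaviour lives behind an interface in the service layer.
interface IBasketHandler {
    void addToBasket(Product product, Basket basket);
}

class PhysicalProductBasketHandler implements IBasketHandler {
    public void addToBasket(Product product, Basket basket) {
        // e.g. stock-level checks would go here, then...
        basket.items.add(product);
    }
}

class DigitalProductBasketHandler implements IBasketHandler {
    public void addToBasket(Product product, Basket basket) {
        basket.items.add(product);
    }
}

// The factory keys off the productType *attribute*, not a subtype.
class BasketHandlerFactory {
    static IBasketHandler forProduct(Product product) {
        switch (product.productType) {
            case "physical": return new PhysicalProductBasketHandler();
            case "digital":  return new DigitalProductBasketHandler();
            default: throw new IllegalArgumentException(product.productType);
        }
    }
}
```

The handlers hold no state of their own, which is what makes them easy to swap via IoC and to test in isolation.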
See the difference? That's right: very little. But in terms of separation of concerns, we've gained a whole lot. Products don't need to know anything, so there's no danger of one of the juniors deciding it's fine to have the Product(string productId) constructor load the product data from the database (performance and maintenance problems, here we come); IoC comes naturally, and testing is much easier with the kind of statelessness that comes with this pattern.
If we're talking semantics, then yes, we don't have a "domain" model any more; we have a data model and a process model, and they're separate. But really, do your products have behaviour, or do various processes happen to the products? Data and process should be separate; they're fundamentally different things. Behaviour is dictated by data and data is shaped by behaviour, but there's a fundamental divide between the two: process can (and will) change, while data is generally so static that it simply cannot.
So, who’s right? Well, I’d say that both models are appropriate, depending on context. If you’ve not got an ecosystem, and you’re fairly sure your single system will always be the one and only, and you’re confident you’ll get the best results from mixing process and data then go for it. If you’re worried about maintainability and have more than one interlinked system, then think seriously about what’s going to give you the best results in the long term. Just be consistent, and if you’re not, be clear about why you’re deviating from your standards or you’ll end up with an unholy amalgamation of different patterns.
This post originally appeared on Ed’s personal site.