Guiding Design with Behavior Verification and Mock Objects

At some point, every developer who has disciplined themselves in the ritual-like art and science of Test Driven Development discovers that the collaborators on which a class under test depends introduce an additional layer of complexity to consider when writing tests – and when designing APIs.

For example, consider a simple test against a class Car which has an instance of class Engine. Car implements a start method which, when invoked, calls the Engine object’s run method. The challenge lies in testing the dependency Car has on Engine; specifically, in verifying that an invocation of Car.start results in the Engine object’s run method being called.

There are two ways of testing the above example of Car – which in unit testing nomenclature is called the System Under Test (SUT) – and its Engine instance, which is Car's Depended-on Component (DOC). The most common approach is to define assertions based on the state of both the SUT and its DOC after the SUT has been exercised. This style of testing is commonly referred to as State Verification, and is typically the approach most developers use initially when writing tests.

Using the above Car example, a typical State Verification test would be implemented as follows:

Figure 1. CarTest, State Verification.
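What follows is a minimal, hypothetical sketch of such a test and of the Car and Engine classes it exercises; it assumes FlexUnit 4 style [Test] metadata and the org.flexunit.asserts.assertTrue function, and package declarations are omitted for brevity.

// Engine.as – a hypothetical Engine whose only behavior is to track a running state.
public class Engine
{
    private var _isRunning:Boolean = false;

    public function run():void
    {
        _isRunning = true;
    }

    public function get isRunning():Boolean
    {
        return _isRunning;
    }
}

// Car.as – note that Car must expose its Engine so the test can verify the Engine's state.
public class Car
{
    private var _engine:Engine = new Engine();
    private var _isStarted:Boolean = false;

    public function start():void
    {
        _isStarted = true;
        _engine.run();
    }

    public function get isStarted():Boolean
    {
        return _isStarted;
    }

    public function get engine():Engine
    {
        return _engine;
    }
}

// CarTest.as – State Verification: assertions are made against the state of
// both the SUT (Car) and its DOC (Engine) after exercising Car.start().
import org.flexunit.asserts.assertTrue;

public class CarTest
{
    [Test]
    public function testStart():void
    {
        var car:Car = new Car();
        car.start();

        assertTrue(car.isStarted);
        assertTrue(car.engine.isRunning); // requires Car to break encapsulation
    }
}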

From a requirements perspective – and therefore from a testing and implementation perspective as well – the expectation of calling start on Car is that it will A.) change its running state to true, and B.) invoke run on its Engine instance. As you can see in Figure 1, in order to test the start method on Car the Engine object must also be tested. In the example, using the State Verification style of testing, Car exposes its Engine instance in order to allow the state of Engine to be verified. This has led to a less than ideal design as it breaks encapsulation and violates the Principle of Least Knowledge. Obviously, a better design of Car.isStarted could be implemented such that it determines whether its Engine instance is also in a running state; however, realistically, Engine.run will likely need to do more than just set its running state to true; conceivably, it could need to do much, much more. More importantly, while testing Car one should only be concerned with the state and behavior of Car – not that of its dependencies. As such, it soon becomes apparent that what really needs to be tested with regard to Engine in Car.start is that Engine.run is invoked, and nothing more.

With this in mind, the implementation details of Engine.run are decidedly of less concern when testing Car; in fact, a “real” concrete implementation of Engine need not even exist in order to test Car; only the contract between Car and Engine should be of concern. Therefore, State Verification alone is not sufficient for testing Car.start as, at best, this approach unnecessarily requires a real Engine implementation or, at worst, as illustrated in Figure 1, can negatively guide design as it would require exposing the DOC in order to verify its state; effectively breaking encapsulation and unnecessarily complicating the implementation. To reiterate an important point: State Verification requires an implementation of Engine and, assuming Test First is being followed (ideally, it is), the concern when testing Car should be focused exclusively on Car and its interactions with its DOC; not on their specific implementations. This is where the second style of testing – Behavior Verification – plays an important role in TDD.

The Behavior Verification style of testing relies on the use of Mock Objects in order to test the expectations of an SUT; that is, that the expected methods are called on its DOC with the expected parameters. Behavior Verification is most useful where State Verification alone would otherwise negatively influence design by requiring the implementation of needless state solely for the purpose of providing a more convenient means of testing. For example, many times an object may not need to be stateful, or the behavior of an object may not always require a change in its state after exercising the SUT. In such cases, Behavior Verification with Mock Objects will lead to a simpler, more cohesive design as it requires careful design consideration of the SUT and its interactions with its DOC. A rather natural side effect of this is promoting the use of interfaces over implementations as well as maintaining encapsulation.

For testing with Behavior Verification in Flex, there are numerous Mock Object frameworks available, all of which are quite good in their own right and more or less provide different implementations of the same fundamental concepts. To name just a few, in no particular order, there are asMock, mockito-flex, mockolate and mock4as.

While any of the above Mock Testing frameworks will do, for the sake of simplicity I will demonstrate rewriting CarTest using Behavior Verification based on mock4as – if for nothing other than the fact that it requires implementing the actual Mock, which helps illustrate how everything comes together. Moreover, the goal of this essay is to help developers understand the design concepts surrounding TDD with Behavior Verification and Mock Objects by focusing on the basic design concepts; not the implementation specifics of any individual Mock framework.

Figure 2. CarTest, Behavior Verification approach.
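Again, what follows is a hypothetical sketch rather than a definitive listing; it assumes FlexUnit 4 style [Test] metadata, that the refactored Car accepts an IEngine through its constructor, and an expects()/success() style of verification inherited from the mock4as Mock base class – those two method names are assumptions about the mock4as API, not confirmed.

// CarTest.as – Behavior Verification sketch (package declaration and runner
// configuration omitted for brevity).
import org.flexunit.asserts.assertTrue;

public class CarTest
{
    [Test]
    public function testStartChangesState():void
    {
        // State Verification against the SUT only.
        var car:Car = new Car(new MockEngine());
        car.start();
        assertTrue(car.isStarted);
    }

    [Test]
    public function testStartInvokesEngineRun():void
    {
        // 1. Define the expectation on the DOC.
        var engine:MockEngine = new MockEngine();
        engine.expects("run");

        // 2. Exercise the SUT.
        var car:Car = new Car(engine);
        car.start();

        // 3. Verify the expectation has been met.
        assertTrue(engine.success());
    }
}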

Let’s go through what has changed in CarTest now that it leverages Behavior Verification. First, Car's constructor has been refactored to require an engine object which implements an IEngine interface, defined as follows.

Figure 3. IEngine interface.
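A minimal sketch of such an interface (package declaration omitted):

// IEngine.as – only the behavior Car actually depends on is defined;
// no isRunning state is required.
public interface IEngine
{
    function run():void;
}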

Note Engine.isRunning is no longer tested, or even defined, as it is simply not needed when testing Car: only the call to Engine.run is to be verified in the context of calling Car.start. Since focus is exclusively on the SUT, only the interactions between Car and Engine are of importance and should be defined. The goal is to focus on testing the SUT and not be distracted by design or implementation details of its DOC beyond those needed by the SUT.

MockEngine provides the actual implementation of IEngine and, as you may have guessed, is the Mock object implementation of IEngine. MockEngine simply serves to provide a means of verifying that when Car.start is exercised it successfully invokes Engine.run; effectively satisfying the contract between Car and Engine. MockEngine is implemented as follows:

Figure 4. MockEngine implementation.
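A minimal sketch, assuming the IEngine interface above (package declaration omitted):

// MockEngine.as – the Mock implementation of IEngine used by CarTest.
import org.mock4as.Mock;

public class MockEngine extends Mock implements IEngine
{
    public function run():void
    {
        // No "real" engine behavior; simply record the invocation so the
        // test can later verify that run() was called on the DOC.
        record("run");
    }
}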

MockEngine extends org.mock4as.Mock from which it inherits all of the functionality needed to “Mock” an object, in this case, an IEngine implementation. You’ll notice that MockEngine.run does not implement any “real” functionality, but rather it simply invokes the inherited record method, passing in the method name to record for verification when called. This is the mechanism which allows a MockEngine instance to be verified once run is invoked.

CarTest has been refactored to provide two distinct tests against Car.start. The first, testStartChangesState(), provides the State Verification test of Car, which tests the expected state of Car after it is exercised. The second, testStartInvokesEngineRun(), provides the actual Behavior Verification test, which defines the expectations of the SUT and verifies those expectations against the DOC; that is, Behavior Verification tests are implemented such that they first define expectations, then exercise the SUT, and finally verify that the expectations have been met. In effect, this verifies that the contract between an SUT and its DOC has been satisfied.

Breaking down the testStartInvokesEngineRun() test, it is quite easy to follow the steps used when writing a Behavior Verification test.

And that’s basically it. While much more can be accomplished with the many Mock Testing frameworks available for Flex, and plenty of information is available on the specifics of the subject, this essay intentionally focuses on the design benefits of testing with Behavior Verification; that is, the design considerations one must make while doing so.

With Behavior Verification and Mock Objects, design can be guided into existence based on necessity rather than pushed into existence based on implementation.

The example can be downloaded here.

Thoughts on Cairngorm 3

A week or so prior to MAX, the Cairngorm committee had a rather interesting discussion, during which Alex outlined what the team at Adobe Technical Services had been considering for Cairngorm 3. The meeting was focused on providing everyone with an overview of the collective ideas which Adobe had been gathering internally for some time, and on soliciting feedback prior to the public announcement of Cairngorm 3.

Prior to the meeting I had anticipated the discussion would be based around a few new patterns and best practices which are currently being advocated, and possibly some additional libraries which help to address recent challenges in RIA development. However, what we discussed was actually quite different – in a good way.

As you are probably aware by now, Cairngorm 3 is focused around tried and tested best practices and guidelines which aid Flex developers in providing solutions to their day-to-day RIA challenges. These guidelines are primarily based upon what has been learned by Adobe Technical Services, and also by the Flex community at large. Teams can leverage these guidelines where applicable to help deliver successful RIAs using frameworks of their choosing. While there may be specific frameworks and libraries recommended in Cairngorm 3, these are just that – recommendations. The approach is generally framework agnostic, which I must emphasize is highly preferable to suggesting one framework over another. This is precisely what I think is needed in the Flex community, for there is rarely a one-size-fits-all approach to software architecture, especially in terms of specific framework implementations. This is a pretty easy concept to comprehend, as what works for one team, in one context, may not always be appropriate for another team, or in another context.

Cairngorm 3 is a step forward towards what (IMHO) should be a general consensus in the Flex community at large; there are many existing frameworks out there which help address specific problems, with each providing unique qualities and solutions in their own right. This is the kind of thought leadership which helps a community progress and grow; it should be encouraged, as allowing for the shared knowledge of fundamental design principles and guidelines is something which provides value to all Flex developers, regardless of which framework they happen to prefer.

If there is one suggestion I would propose, it would be to give an entirely new name to this collection of best practices, guidelines and general Flex-specific solutions. Personally, I would like to see the name Cairngorm (which, after all these years, I still pronounce as Care-in-gorm) refer to the original MVC framework, i.e. the framework implementation itself, as keeping the name the same while undergoing a different direction is bound to cause confusion to some extent. Whatever the new name would be is insignificant, as long as the original name of Cairngorm continued to refer to the actual framework implementation. This would perhaps be more intuitive as it would allow the name Cairngorm to be used to describe a concrete framework as a potential solution, just as one could describe other frameworks; e.g. Spring ActionScript, Mate, Swiz, Parsley, Penne, Model-Glue, PureMVC, Flicc etc.

Most important, however, is the prospect of choice, as choice is always a good thing. Moreover, an initiative being led by Adobe in this area sends a very good message to the Flex community as a whole. I happen to leverage a number of different frameworks and patterns which address different problems. As new problems arise, I employ new solutions where existing solutions may not suffice, or develop custom solutions where none are currently available; never blindly choosing one solution over another. However, in every case there are typically some basic, fundamental guidelines which hold true and can be followed to help drive a design in the right direction. Regular readers of this blog have probably noticed that the majority of my posts are heavily rooted in these fundamental design principles, as it is from these principles and guidelines that developers can utilize the frameworks which work best for them in order to meet their specific challenges.

Essentially, Software Architecture is all about managing complexity, and there are many fundamental patterns and guidelines which can help developers manage this complexity. The specific framework implementations are of less concern, for it is the understanding of these patterns and principles – and, more importantly, when to apply them – which will ultimately drive the decision to leverage one framework over another. In my experience, the only constant in software architecture is that a pragmatic approach should be taken whenever possible, whereby context is always key and simplicity is favored as much as possible. Cairngorm 3, I feel, is a nice illustration of this principle.

Vector Iterator for Flex

One of the many welcome additions to the Flex 3.4 SDK is the inclusion of the Vector class. Vectors in particular are especially welcome as they provide compile-time type safety which would otherwise not be available when implementing custom solutions, such as a typed collection.

Essentially, Vectors are just typed Arrays. And while not as robust or powerful, Vectors are similar to Generics in C# and Java. When it is known at design time that a collection will only ever need to work with a single type, Vectors can be utilized to provide type safety and also to allow for significant performance gains over other collection types in Flex.

I recently wanted to convert quite a few typed Array implementations to Vectors; however, the Arrays were being traversed with an Iterator. In order to reduce the amount of client code which needed to be refactored, I simply implemented a Vector-specific Iterator.

If you are familiar with the Iterator pattern in general, and the Iterator interface in particular, then usage will prove to be very straightforward. You can use the VectorIterator to perform standard iterations over a Vector. Below is an example of a typical client implementation:
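This is a minimal sketch of such client code; it assumes a hypothetical IIterator interface exposing hasNext() and next(), and the VectorIterator sketched further below.

// Client code iterating over a Vector via the (hypothetical) IIterator contract.
var names:Vector.<String> = new Vector.<String>();
names.push("one", "two", "three");

var iterator:IIterator = new VectorIterator(names);
while (iterator.hasNext())
{
    // next() returns Object, so the compile-time type safety of the
    // Vector is lost at this point.
    trace(String(iterator.next()));
}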

Using an Iterator with a Vector ensures only a linear traversal can be performed, which proves useful with Vectors as they are dense Arrays. However, one consideration that must be made when using an Iterator with a Vector is that you lose type safety when accessing items in the Vector via iterator.next(). It is because of this that I would suggest only using Iterators with Vectors to support backwards compatibility when refactoring existing Arrays which are being used with existing Iterators.

The VectorIterator and its associated test are available below:
VectorIterator
VectorIteratorTest
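For illustration, a minimal sketch of how such a VectorIterator might be implemented follows; the IIterator interface and the untyped constructor parameter are assumptions on my part, so the downloadable source may differ (package declarations omitted for brevity).

// IIterator.as – hypothetical iterator contract (hasNext/next).
public interface IIterator
{
    function hasNext():Boolean;
    function next():Object;
}

// VectorIterator.as – the constructor parameter is intentionally untyped
// because Vector types are not covariant in ActionScript 3 (a Vector.<String>
// cannot be assigned to a Vector.<Object> reference).
public class VectorIterator implements IIterator
{
    private var _vector:*;
    private var _index:int = 0;

    public function VectorIterator(vector:*)
    {
        _vector = vector;
    }

    public function hasNext():Boolean
    {
        return _index < _vector.length;
    }

    public function next():Object
    {
        return _vector[_index++];
    }
}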

Cairngorm Abstractions: Commands and Responders

It is quite common to find a significant amount of code redundancy in Flex applications built on Cairngorm. This is by no means a fault of the framework itself – quite the contrary, as Cairngorm is designed with simplicity in mind, appropriately taking a less-is-more approach rather than providing a more prescriptive framework, and defining only the implementation classes necessary to facilitate the “plumbing” behind the framework. Everything else is really just an interface.

With this amount of flexibility comes additional responsibility, in that developers must decide what the most appropriate design is based on their application's specific context. Moreover, as with any design, there is never truly a one-size-fits-all approach which can be applied to any problem domain; there are really only common patterns and conventions which can be applied across domains and applications. This, IMHO, is what has allowed the framework to be a success, and it is important to understand that this simplicity also requires developers to give their designs the same attention one would give to any Object Oriented design.

However, over the years I have found a significant amount of redundancy in Flex applications built on Cairngorm. This appears to be, more often than not, the result of developers implementing Cairngorm examples verbatim in real-world applications, and in doing so failing to define proper abstractions for commonly associated concerns and related responsibilities. The most common examples of this are the typical implementations of Commands, Responders, Business Delegates and Presentation Models.

For some of you this may all seem quite obvious; for others, hopefully this series will provide some insight into how one can reduce code redundancy across Cairngorm applications by implementing abstractions for common implementations.

This topic will be a multi-part series in which I will provide some suggestions surrounding the common patterns of abstractions which can be implemented in an application built on Cairngorm, with this first installment based on common abstractions of Cairngorm Commands and Responders. Other areas in future posts will cover Business Delegate and Presentation Model abstractions. So let’s get started…

Command Abstractions
First let’s begin by looking at what is arguably the simplest abstraction one could define in a Cairngorm application to simplify code and eliminate areas of redundancy – Command abstractions. This example assumes the concern of mx.rpc.IResponder implementations is abstracted to a separate object. For more on this subject see my post regarding IResponder and Cairngorm.

A traditional Cairngorm Command is typically implemented as something to the extent of the following:
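The sketch below is representative rather than taken from a real project; the command name, the AppModelLocator Singleton and its productsLoading property are all hypothetical.

package com.somedomain.someproject.commands
{
    import com.adobe.cairngorm.commands.ICommand;
    import com.adobe.cairngorm.control.CairngormEvent;
    import com.somedomain.someproject.model.AppModelLocator;

    public class LoadProductsCommand implements ICommand
    {
        public function execute(event:CairngormEvent):void
        {
            // Every command repeats this import and Singleton look-up.
            var model:AppModelLocator = AppModelLocator.getInstance();
            model.productsLoading = true;

            // Delegate invocation omitted; the responder concern is assumed
            // to be handled by a separate object.
        }
    }
}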

The problem with the above Command implementation is that it results in numerous look-ups of the ModelLocator Singleton instance – one in every execute implementation which needs to reference the ModelLocator.

A simpler design would be to define an abstraction for all Commands which contains this reference, as in the following:
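A minimal sketch, assuming the same hypothetical AppModelLocator as above:

package com.somedomain.someproject.commands
{
    import com.somedomain.someproject.model.AppModelLocator;

    public class AbstractCommand
    {
        // Shared, protected reference available to every concrete Command.
        protected var model:AppModelLocator = AppModelLocator.getInstance();
    }
}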

As in any OO system there are many benefits to defining abstractions, and a good design certainly reflects this. For example, just by defining a very basic abstraction for all Commands we have eliminated the redundant look-ups of the ModelLocator in every Command in the application, as well as the redundant imports. By defining an abstraction for common references your code becomes easier to read and maintain, as the number of lines of code is certainly reduced.

Commands are by far the easiest to create an abstraction for, as most Commands will typically reference the ModelLocator; if so, they can do so simply by extending an AbstractCommand, and if not, they can implement ICommand as they traditionally would.

So the first example could now be refactored to the following:
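Again as a sketch, using the same hypothetical names:

package com.somedomain.someproject.commands
{
    import com.adobe.cairngorm.commands.ICommand;
    import com.adobe.cairngorm.control.CairngormEvent;

    public class LoadProductsCommand extends AbstractCommand implements ICommand
    {
        public function execute(event:CairngormEvent):void
        {
            // The inherited reference replaces the per-command Singleton
            // look-up and its import.
            model.productsLoading = true;
        }
    }
}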

You could take these abstractions a step further and define additional abstractions for related behaviors and contexts, all of which would also extend the AbstractCommand if a reference to the application's ModelLocator is needed.

Responder Abstractions
Now let’s take a look at an abstraction which is much more interesting – Responder abstractions. In this example we will focus on the most common Responder implementation, mx.rpc.IResponder; however, the same could easily apply to an LCDS Responder implementation of a DataService.

A single RPC Responder abstraction could be defined for HTTPServices, WebServices and RemoteObjects, as each request against any of these services results in a response of either result or fault – hence the IResponder interface's contract.

For example, consider a typical Responder implementation which could be defined as follows:
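As a representative sketch, again using hypothetical names (LoadProductsResponder, AppModelLocator and its products property):

package com.somedomain.someproject.business
{
    import mx.collections.ArrayCollection;
    import mx.controls.Alert;
    import mx.rpc.IResponder;
    import mx.rpc.events.FaultEvent;
    import mx.rpc.events.ResultEvent;
    import com.somedomain.someproject.model.AppModelLocator;

    public class LoadProductsResponder implements IResponder
    {
        public function result(data:Object):void
        {
            // The cast and ModelLocator look-up are repeated in every
            // responder across the application.
            var resultEvent:ResultEvent = data as ResultEvent;
            var model:AppModelLocator = AppModelLocator.getInstance();
            model.products = resultEvent.result as ArrayCollection;
        }

        public function fault(info:Object):void
        {
            // Fault handling is duplicated in every responder as well.
            var faultEvent:FaultEvent = info as FaultEvent;
            Alert.show(faultEvent.fault.faultString);
        }
    }
}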

By defining a Responder abstraction, each concrete Responder implementation would require significantly less code as the redundant cast operations could be abstracted away, and, as with the Command abstractions, a convenience reference to the application-specific ModelLocator could also be defined. Moreover, a default service fault implementation could be defined so that each service fault is handled uniformly across the application.

Thus we could define an abstraction for RPC Responders as follows:
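A minimal sketch of such an abstraction; the onResult/onFault template methods shown here are simply one possible approach, and the AppModelLocator remains hypothetical.

package com.somedomain.someproject.business
{
    import mx.controls.Alert;
    import mx.rpc.IResponder;
    import mx.rpc.events.FaultEvent;
    import mx.rpc.events.ResultEvent;
    import com.somedomain.someproject.model.AppModelLocator;

    public class AbstractResponder implements IResponder
    {
        // Convenience reference to the application's ModelLocator.
        protected var model:AppModelLocator = AppModelLocator.getInstance();

        public function result(data:Object):void
        {
            // Perform the cast once, in one place.
            onResult(data as ResultEvent);
        }

        public function fault(info:Object):void
        {
            onFault(info as FaultEvent);
        }

        protected function onResult(event:ResultEvent):void
        {
            // Overridden by concrete responders.
        }

        protected function onFault(event:FaultEvent):void
        {
            // Default, application-wide fault handling; concrete responders
            // may override for more specific behavior.
            Alert.show(event.fault.faultString);
        }
    }
}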

We could now refactor the original Responder implementation to the following simplified implementation:
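A sketch, using the hypothetical names from above:

package com.somedomain.someproject.business
{
    import mx.collections.ArrayCollection;
    import mx.rpc.events.ResultEvent;

    public class LoadProductsResponder extends AbstractResponder
    {
        override protected function onResult(event:ResultEvent):void
        {
            // No cast, no Singleton look-up, no duplicated fault handling.
            model.products = event.result as ArrayCollection;
        }
    }
}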

As you can see, just by pulling common references and functionality up into two abstractions we can significantly reduce redundancy across all Commands and Responders. This allows designs to improve dramatically, as it enables the isolation of tests and limits the amount of concrete implementation code developers need to sift through when working with your code.

It is important to understand that a design which is built in part on Cairngorm must still adhere to the same underlying Object Oriented design principles as any other API, and in doing so you will end up with a much simpler design which can easily scale over time.

Pattern Recognition

It has been said that the true sign of intelligence lies in one's ability to recognize patterns – and there is a lot to be said for that statement, as patterns can be found everywhere, in everything, in everyday life.

One of the greatest strengths of human intelligence is our ability to recognize patterns and abstract symbolic representations, even when they occur in contexts different from those in which we originally learned them. It’s why hard-to-grasp concepts which are foreign or new to us become very clear when explained through metaphor.

This ability to recognize patterns is essential to our survival, and always has been. For example, practically all ancient civilizations had a very good understanding of the recurring patterns in their environment – something we like to call seasons. This understanding of patterns in time and climate was crucial to the survival of these early civilizations. Our ability to recognize patterns is essential to our learning and understanding of the world around us. Pattern recognition is a cognitive process much like intuition; arguably, the two are inter-related or possibly one and the same.

Suppose you want to lose a few pounds, save a little extra money, or learn a new programming language, but you are not seeing the results you would like. By recognizing patterns in your behavior you will begin to notice areas which need to be adjusted, and from that determine an appropriate solution and the necessary adjustments to be made in order to achieve your goal. For example, maybe you’ve been trying to save some extra money and after a few months realize you are getting nowhere. You then analyze your behavior for recurring patterns and realize you're spending half your pay every weekend on beer – just kidding, but you get my point.

Pattern Recognition in Software Development

In the world of software development patterns apply in pretty much just the same way – our ability to recognize them is essential to ensuring the success of a software application. When we discover patterns of recurring problems in software we are then able to consider various potential patterns from a catalog of named solutions, i.e. Design Patterns. Once an appropriate solution is found we can apply it to resolve the problem regardless of the domain.

When designing software, patterns are something that should reveal themselves, never something forced into a design. This is how patterns should always be applied: you have a problem, and based on that problem you begin to recognize common patterns, or maybe new ones, which can be applied as a solution. It should never be the other way around, that is, a solution (pattern) looking for a problem. However, this happens quite often and is pretty evident in many software applications. Many refer to this as “pattern fever”; personally I like to call it “patterns for patterns’ sake”, or simply “for patterns’ sake”. Because that’s really what it is.

For example, have you ever found a Singleton implementation where an all-static class would have sufficed (e.g. utilities)? Or a code-behind implementation class which masquerades as an abstract class? Or an interface where there is clearly only a need for a single concrete implementation (e.g. data-centric implementations), or a marker interface which serves no purpose at all? The list goes on and on.

In some cases it very well may just be an innocent flaw in the design; however, the majority of the time it’s a telltale sign of someone learning a new pattern and knowingly, albeit mistakenly, attempting to implement that pattern in production code. This is clearly the wrong way of learning a new pattern. Learning new design patterns is great and a lot of fun, but remember, there is a time and place for everything, and production code isn’t it.

Learning Patterns

One of the best ways to learn a new pattern (or anything new, for that matter) is to explore it. Begin by reading enough about it for some of the basic concepts to sink in a bit. Put it into context; think of it in terms of metaphor – in ways that make sense to you, remembering that you are learning this. Question it. Then experiment with it. See how it works, see how it doesn’t work, break it, figure out how to put it back together, and so on – whatever works best for you. Most importantly, always do it as a separate effort, such as a POC, and never in production code.

Once you get this down and understand the various patterns you’ll find you never need to look for them, for if they are needed they will reveal themselves sure enough.

What makes a good design?

One of my core job responsibilities for the past several years has been to conduct technical design and implementation (code) reviews during various phases of the software development life cycle. This is typically a highly collaborative process whereby an individual engineer and I, or the team as a whole, begin by performing a detailed analysis of business requirements in order to gain an initial understanding of the specific component(s) being developed. Once an understanding of the requirements has been reached, a brainstorming session ensues which ultimately leads to various creative, technical solutions. After discussing the pros and cons of each, the best solutions quickly begin to reveal themselves, at which point it is simply a process of elimination until the most appropriate solution has surfaced.

The next step is to translate the requirements into the proposed technical solution in the form of a design document. The design is specified at a high level and is only intended to provide an overview of the appropriate technical road map which is to be implemented. This typically consists of higher-level UML Sequence and Class diagrams, either in the form of actual diagrams produced in a UML editor, or simply a picture of UML drawn out during a whiteboarding session. The formality of the documented design is less important; what is important is that the design is captured in some form before it is implemented. Implementation-specific details such as exact class and method signatures and so forth are intentionally left out, as they are considered outside the scope of the design. See Let Design Guide, not Dictate for more on this subject. Once the design is documented it is reviewed and changes are made if needed. This process is repeated until all business and technical requirements have been satisfied, at which point the “all clear” is given to move forward with implementing the design.

But what exactly constitutes a good design? How does one determine a good design from a bad one? In reality it could vary significantly based on a number of factors; however, in my experience I have found a design can almost always be judged according to three fundamental criteria: Correctness, Cohesion / Coupling and Scalability. For the most part everything falls into one of these three categories. Below is a brief description of the specific design questions each category sets out to answer.

  • Correctness
    Does the design solve the problems described in the requirements and discussed by the team? This is Correctness in the form of satisfying business requirements. Are the patterns implemented in the design appropriate, or are additional patterns being used just for the sake of using the pattern? This is Correctness in the form of technical requirements. A good design is well focused and only strives to provide a solution which meets the requirements specified by the business owners, client etc; it does not attempt to be overly clever.
  • Cohesion / Coupling
    Has a highly cohesive, loosely coupled design been achieved? Have the classes, interfaces and APIs been logically organized? Does each provide a specific, well-defined set of functionality? Is composition used over inheritance where applicable? Has related functionality been properly abstracted? Does changing this break that, does adding that break this, etc.
  • Scalability
    Does the solution scale well? Is it flexible? A good design strives to facilitate change with confidence, and with as little risk as possible. A good design also achieves transparency at some level in the areas where it is most applicable.

The concepts outlined above are crucial to achieving a good design, yet they are often overlooked or misunderstood to some degree. Throughout the years I have begun to recognize some commonality in the design mistakes I find in Object Oriented designs in general, and within Flex projects in particular. Many of these can typically be attributed to violations of basic MVC principles, but most commonly the design mistakes appear to be a negation of Separation of Concerns (SoC).

There are close relationships between Correctness, Cohesion / Coupling and Scalability, each of which plays a very significant role in the resulting design as a whole.

So let's start with Correctness, which is by far the single most important facet of design, for if the design does not provide a solution which satisfies the specified requirements then it has failed – all other aspects of the design are, for the most part, details.

It is important to understand that Correctness has a dependency on Flexibility. For example, as architects and developers our understanding of the problem domain is constantly evolving as we gain experience in the domain. Additionally, as requirements may change significantly while a product is being developed, our designs must be able to adapt to these changes as well. Although this poses some challenges, it is wrong to suggest that requirements need to be locked down completely before the design phase begins; rather, requirements need only be clearly defined to the extent that the designer is aware of what is required at that point in time and how it fits into the “big picture”. A competent designer understands this well and makes careful considerations before committing to any design decisions. This is where the importance of Flexibility comes into play. In order for a design to be conceptually and technically correct it needs to be flexible enough to support change. This is why good design is so important – to easily facilitate change. As such, the flexibility to allow change should be evident throughout the design. A good example might be where the middle tier has not decided which service layer implementation will be used (e.g. XML:80, WSDL, REST etc.), or the Information Architects have not decided what the constraints of each user role will be. A good design should be flexible enough to allow for changes such as these, as well as others, with confidence and, more importantly, little risk to other parts of the application; after all, you shouldn’t have to tear down the house just to renovate the bathroom. In addition to Correctness and Scalability, this is where Cohesion and Coupling come into play.

High Cohesion is vital to achieving a good design as it ensures related functionality and responsibilities are logically grouped together, encapsulated and abstracted. We have all seen the dreaded, all-encompassing class which assumes multiple responsibilities. Classes such as these have low cohesion and are a sign of future challenges if not addressed immediately. At a higher level, if high cohesion has not been achieved it is easy to notice, as there will typically be only one class which comprises an entire API; however, quite often low cohesion in classes may be a bit more subtle than one might expect, and a code review will reveal the areas where it has crept in.

For example, consider the following Logging facility which is intended to provide a very simple logging implementation:
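The sketch below is a hypothetical reconstruction of such a low-cohesion Logger, using a simple trace-based implementation:

package com.somedomain.someproject.logging
{
    public class Logger
    {
        public function log(message:String):void
        {
            // Creating and formatting a time stamp is a separate concern,
            // yet it lives inside the Logger.
            var now:Date = new Date();
            var timestamp:String = now.getFullYear() + "-"
                + pad(now.getMonth() + 1) + "-"
                + pad(now.getDate()) + " "
                + pad(now.getHours()) + ":"
                + pad(now.getMinutes()) + ":"
                + pad(now.getSeconds());

            trace("[" + timestamp + "] " + message);
        }

        private function pad(value:int):String
        {
            return value < 10 ? "0" + value : String(value);
        }
    }
}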

The above example is a classic case of low cohesion; I see this kind of thing all the time. The problem is that the Logger class assumes the responsibility of creating and formatting a time stamp – functionality which is outside the responsibilities of the Logging API. Creating and formatting a time stamp is not a concern of the Logger, but rather the responsibility of a separate date formatting utility whose sole purpose is to provide an API for formatting Date objects. Moving the Date formatting functionality out of the Logging API and into a class which is responsible for formatting Date objects would facilitate code reuse across many APIs, reduce redundancy and duplicated testing, and allow the Logger class to define only operations which are directly related to logging. A good design must achieve high cohesion if it is to be successful.

Coupling is also essential in determining a good design. A good way to think of coupling is this: think back to when you were a kid playing with blocks – you could easily take any number of different blocks and rearrange them to build whatever you like; that’s loose coupling. Now compare that to a crossword or jigsaw puzzle, where the pieces only fit together in a very specific way; that’s tight coupling. A good design strives to achieve loosely coupled APIs in order to facilitate change as well as reuse. A classic, yet less commonly mentioned, example of tight coupling is in the packaging of APIs. Quite often designers will achieve loosely coupled APIs, yet the APIs themselves remain tightly coupled to the application namespace.

Consider the Logging API example from above; note that the API is defined under the package com.somedomain.someproject.logging. Even if the example were refactored to achieve high cohesion it would still be tightly coupled to the project-specific namespace. This is a bad design, as in the event another product should need to use the Logging API it would first need to be refactored to a common namespace. A better design would be to define the Logging API under the less specific namespace of com.somedomain.logging. This is important as the Logging facility itself should be generic enough to be used across multiple projects. Something as simple as the proper packaging of generic and specific components plays a key role in a good design. A better design for the above example would be as follows; this design achieves both high cohesion and loose coupling:
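A minimal sketch of the refactored design, assuming a hypothetical DateUtil formatting utility under a generic com.somedomain.utils package:

package com.somedomain.logging
{
    // Generic namespace: the Logger is no longer coupled to a specific project.
    // DateUtil is a hypothetical date formatting utility whose sole
    // responsibility is formatting Date objects.
    import com.somedomain.utils.DateUtil;

    public class Logger
    {
        public function log(message:String):void
        {
            // Time-stamp formatting is delegated rather than implemented here.
            trace("[" + DateUtil.format(new Date()) + "] " + message);
        }
    }
}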

As with all design, technical design is subjective. Architects and engineers can spend an infinite amount of time debating the various points of design. In my experience it really comes down to organization and efficiency: that is, the organization of responsibilities and concerns, and the efficiency of their implementation, both individually and as a whole.

It may sound cliché, however, before you begin a new design, or review an existing one, consider the following quote – it pretty much sums up what good design is:

“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”
– Antoine de Saint-Exupery