
Multiple Form Factor Software Design

I have been giving a lot of thought lately to designing software in a Multi-Form Factor paradigm and thought I would share some initial observations on the subject. Keep in mind much of this is still quite new and subject to change; however, I have made an attempt to isolate what I feel will remain constant moving forward.

First, User Experience Design

My initial thoughts on the implications an ever-growing Multi-Form Factor paradigm will have on the way we think about software design are primarily concerned with User Experience Design. While using CSS3 media queries to facilitate dynamic layouts will be needed for most Web Applications, I do not believe these types of solutions alone will allow for the kinds of compelling experiences users have come to expect, especially as they will likely compare Mobile Web Application experiences to their native counterparts. Sure, some basic solutions will be needed, and for some simple websites they may suffice. However, in the context of Web Applications, as well as just about every application developed specifically for a PC, I believe UX Design will need to leverage the unique opportunities presented by each particular form factor, be it a PC, smartphone, tablet or TV. Likewise, UX will need to account for the constraints of each form factor as well. Architecturally, all of the above presents both opportunity and challenge.

To further illustrate this point, consider that it is arguably quite rare for a UX Design intended for users of a PC to translate directly to a Mobile or Tablet User Experience. The interactions of a traditional physical keyboard and mouse do not always equate to those of soft keys, virtual keyboards and touch gestures. Moreover, the navigation and transitions between different views, and even certain concepts and metaphors, are completely different. In the simplest terms, it’s not “Apples to Apples”, as the expression goes.

With this in mind, as always, UX Design will need to remain at the forefront of Software Design.

Second, Architecture

Multi-Form Factor design obviously poses some new architectural challenges considering the growing number of form factors which will need to be taken into account. The good news is that most existing, well-designed software architectures will already have accounted for this to a certain degree. The key factor in managing this complexity, I believe, will be code reuse; specifically, generalization and abstraction. A common theme amongst many of my posts, code reuse has many obvious benefits, and in the context of Multi-Form Factor concerns it will allow different device-specific applications to leverage general, well-defined and well-tested APIs; a good example being a well-designed RESTful JSON service.

Code reuse will certainly be of tremendous value when considering the complexities encountered with Multi-Form Factor design. Such shared libraries, APIs and Services can be reused across applications designed for particular form factors, or extended to provide screen- or device-specific implementations.

Some Concluding Thoughts

In short, I believe both users and developers alike will be best served by providing unique User Experiences for specific Form Factors as opposed to attempting to adapt the same application across Multiple Form Factors. One of the easiest ways of managing this complexity will inevitably be code reuse.

I also believe the main focus should be on the medium and small form factors, i.e. tablets and smartphones; not only for the more common reasons, but also because I believe PCs and laptops will eventually be used almost exclusively for developing the applications which run on the other form factors. In fact, I can already say this from my own experience.

While there is still much to learn in the area of Multi-Form Factor Design, I feel the ideas I’ve expressed here will remain relevant. Over the course of the coming months I plan to dedicate much of my time towards further exploration of this topic and will certainly continue to share my findings.

Practices of an Agile Developer

Of the many software engineering books I have read over the years, Practices of an Agile Developer in particular continues to be one book I find myself turning to time and time again for inspiration.

Written by two of my favorite technical authors, Andy Hunt and Venkat Subramaniam, and published as part of the Pragmatic Bookshelf, Practices of an Agile Developer provides invaluable, practical and highly inspirational solutions to the most common challenges we as software engineers face project after project.

What makes Practices of an Agile Developer something truly special is the simplicity and easy to digest format in which it is written; readers can jump in at any chapter, or practically any page for that matter, and easily learn something new and useful in a matter of minutes.

While covering many of the most common subjects in software development, as well as many particularly unique ones, it is the manner in which the subjects are presented that makes the book itself quite unique. The chapters are formatted such that each provides an “Angel vs. Devil on your shoulders” perspective on its topic. This is quite useful, as one can briefly reference any topic and take away something useful by simply reading the chapter’s title and the “Angel vs. Devil” advice, and from that come to a quick understanding of the solution. Moreover, each chapter also provides tips on “How it Feels” when following one of the prescribed approaches. The “How it Feels” approach is very powerful in that it instantly draws readers in for more detailed explanations. Complementary to this are the “Keeping your Balance” suggestions, which provide useful insights into many of the challenges one might face when trying to apply the learnings of a particular subject. “Keeping your Balance” tips answer questions which would otherwise be left to the reader to figure out.

I first read Practices of an Agile Developer almost four years ago, and to this day I regularly find myself returning to it for inspiration. A seminal text by all means, I highly recommend it as a must-read for Software Developers of all levels and disciplines.

Domain Models and Value Objects

The other day a friend asked me what the difference is between a Value Object and a Domain Model, and when I would suggest using one over the other.

Since I have been asked this very same question quite a few times, I thought it might prove useful to provide a brief definition of each in a language-agnostic idiom which could serve as a point of reference for others as well. Thus, below is a general definition of each.

Domain Models

A Domain Model is anything of significance which represents a specific business concept within a problem domain. Domain Models are simply classes which represent such concepts by defining all of the state, behavior, constraints and relationships to other Domain Models needed to do so. Essentially, a Domain Model “models” a domain concept, such as a Product, a User, or anything which could be defined within a problem domain itself, outside of the context of code.

Domain Models promote reuse and eliminate redundancy by defining specific classes which encapsulate business logic, state, behaviors and relationships. As business domain concepts change, so too do the implementations of the Domain Models.
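
To make this a bit more concrete, here is a minimal, hypothetical sketch in ActionScript 3 (the package, class and business rule are purely illustrative) of a Product Domain Model which encapsulates state, a constraint and a behavior:

package com.example.domain
{
    /**
     * Hypothetical Product Domain Model: models the "Product" business
     * concept by encapsulating its state, constraints and behavior.
     */
    public class Product
    {
        private var _name:String;
        private var _price:Number;

        public function Product(name:String, price:Number)
        {
            _name = name;
            setPrice(price);
        }

        public function get name():String
        {
            return _name;
        }

        public function get price():Number
        {
            return _price;
        }

        // Business constraint: a price may never be negative.
        public function setPrice(value:Number):void
        {
            if (value < 0)
            {
                throw new ArgumentError("price must be zero or greater");
            }
            _price = value;
        }

        // Business behavior: apply a percentage discount to the current price.
        public function applyDiscount(percent:Number):void
        {
            setPrice(_price - (_price * (percent / 100)));
        }
    }
}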

Value Objects

As the name implies, a Value Object, more commonly referred to as a VO, is an object which simply provides values, nothing more.

Value Objects are entirely immutable; that is, all properties are read-only and assignments to those properties are made only during object creation; after which, properties cannot be modified and, by design, should not require changes.

Value Objects are typically used to provide an aggregation of conceptually related properties whose values describe the initial state of the object when instantiated and do not require any real concept of identity or uniqueness. While there are some edge cases (such as validation), more often than not Value Objects do not implement any specific behavior. Conceptually, think of a Value Object as nothing more than an object which holds a value, or a series of related values, describing something about the object when it was created.

It is important to make the distinction between Value Objects and Domain Models: a Value Object is not a Model, but rather nothing more than an object which holds values and could be used to describe any particular context. A good example of a Value Object is a JSON object returned from the server; that is a Value Object. A Domain Model could then wrap the Value Object in order to provide state changes, validation and behaviors.
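
As a minimal, hypothetical sketch (again, the names are purely illustrative), a ProductVO simply carries read-only values assigned at creation:

package com.example.vo
{
    /**
     * Hypothetical Value Object: an aggregation of related, read-only values
     * assigned once at creation; no identity, no behavior.
     */
    public class ProductVO
    {
        private var _name:String;
        private var _price:Number;

        public function ProductVO(name:String, price:Number)
        {
            _name = name;
            _price = price;
        }

        public function get name():String
        {
            return _name;
        }

        public function get price():Number
        {
            return _price;
        }
    }
}

A Product Domain Model, such as the sketch above, could then accept a ProductVO and layer validation, state changes and behavior on top of the values it carries.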

And that’s it

Hopefully the above descriptions of both Domain Models and Value Objects will clear up any confusion surrounding the two concepts; ideally, making it easier to understand when to use each.

The point to keep in mind is that Domain Models model a business concept, including its rules, constraints and behaviors, while Value Objects simply describe a contextual state.

Misplaced Code

Often I come across what I like to call “Misplaced Code”, that is, code which should be refactored to a specific, independent concern rather than mistakenly being defined in an incorrect context.

For instance, consider the following example to get a better idea of what I mean:
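
The listing below is a hypothetical sketch (the hostnames and URLs are purely illustrative) of the kind of code in question; imagine it sitting in the Script block of an MXML view:

// Misplaced: environment detection defined directly within a view concern.
private function resolveGatewayUrl():String
{
    var url:String = loaderInfo.url; // the URL from which the application was loaded

    if (url.indexOf("localhost") > -1 || url.indexOf("dev.example.com") > -1)
    {
        return "http://dev.example.com/amf/gateway";
    }

    return "http://www.example.com/amf/gateway";
}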

Taking the above example into a broader context, it is quite common to see code such as this scattered throughout a codebase, particularly in view concerns. At best this becomes hard to maintain and, at worst, it will result in unexpected bugs down the road. In most cases (as in the above example) the code itself is not necessarily bad; rather, it is the context in which it is placed that I would like to highlight, as it will almost certainly cause some amount of technical debt.

Considering the above example, should code such as this become redundantly implemented throughout a codebase, it is quite easy to see how it can become a maintenance issue: something as simple as a change to a hostname would require multiple refactorings. A much more appropriate solution would be to encapsulate this logic within a specific class whose purpose is to provide a facility from which this information can be determined. In this manner unnecessary redundancy (as well as risk) is eliminated, and valuable development time is regained, as the code need only be written and tested once, in one place.

So again, using the above example, this could be refactored to a specific API which client code would then leverage, as in the following:
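
Again, purely as a hypothetical sketch (the Environment class name is illustrative, and FlexGlobals assumes Flex 4), the logic could be encapsulated behind a single, well-defined API:

package com.example.core
{
    import mx.core.FlexGlobals;

    /**
     * Hypothetical sketch: environment detection encapsulated in one place,
     * written and tested once, and reused by any client which needs it.
     */
    public class Environment
    {
        public static function get isDevelopment():Boolean
        {
            var url:String = String(FlexGlobals.topLevelApplication.url);
            return url.indexOf("localhost") > -1 || url.indexOf("dev.example.com") > -1;
        }

        public static function get gatewayUrl():String
        {
            return isDevelopment ? "http://dev.example.com/amf/gateway"
                                 : "http://www.example.com/amf/gateway";
        }
    }
}

Client code then simply leverages the API:

var gateway:String = Environment.gatewayUrl;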

This may appear quite straightforward; however, I have seen examples like this one in numerous projects over the years, so it is worth pointing out. Always take the context in which code is placed into consideration, and you will reap the maintenance benefits in the long run.

Metadata API update for Flex 4

Back in September 2008 I published a post on Class Annotations in Flex, through which I provided an API allowing for a unified approach for working with AS3 metadata. Just recently I intended to use the Metadata API in a Flex 4 project when I noticed it no longer seemed to work as expected. After a bit of debugging, I realized the bug appeared to be related to changes in the result returned from describeType; specifically, the addition of the “__go_to_definition_help” metadata. Considering the Metadata API leverages describeType in order to introspect metadata definitions, this made sense.

I made some simple modifications to the API to work with the new metadata returned from describeType and this resolved the issue. While I was in the process I also refactored operations to return Vectors as opposed to ArrayCollections, as was the case in the original design; effectively optimizing the API for Flex 4 (as well as Flex SDK 3.4).
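
For the curious, the essence of the change is simply to ignore the new metadata entry when introspecting. A simplified sketch of the idea (not the actual API implementation) might look something like this:

import flash.utils.describeType;

// Simplified sketch: collect the class-level metadata names for an instance,
// ignoring the "__go_to_definition_help" entry added to describeType output.
function getMetadataNames(instance:Object):Vector.<String>
{
    var names:Vector.<String> = new Vector.<String>();
    var description:XML = describeType(instance);

    for each (var entry:XML in description.metadata)
    {
        if (entry.@name != "__go_to_definition_help")
        {
            names.push(String(entry.@name));
        }
    }

    return names;
}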

You can download the source/tests/swc/docs dist here; all of which will soon be available via SVN and a public Maven repository. More on that once it’s ready.

Guiding Design with Behavior Verification and Mock Objects

At some point, every developer who has disciplined themselves in the ritual-like art and science of Test Driven Development discovers that the collaborators on which a class under test depends introduce an additional layer of complexity to consider when writing your tests – and designing your APIs.

For example, consider a simple test against a class Car which has an instance of class Engine. Car implements a start method which, when invoked, calls the Engine object’s run method. The challenge here lies in testing the dependency Car has on Engine, specifically, how one verifies that an invocation of Car.start results in the Engine object’s run method being called.

There are two ways of testing the above example of Car, which in unit testing nomenclature is called the System Under Test (SUT), and its Engine instance, which is Car's Depended-on Component (DOC). The most common approach is to define assertions based on the state of both the SUT and its DOC after being exercised. This style of testing is commonly referred to as State Verification, and is typically the approach most developers initially use when writing tests.

Using the above Car example, a typical State Verification test would be implemented as follows:

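(The listing below is a minimal sketch, assuming FlexUnit 4 style tests and simple, hypothetical Car and Engine implementations.)

package com.example.test
{
    import org.flexunit.Assert;

    public class CarTest
    {
        [Test]
        public function testStart():void
        {
            var car:Car = new Car();

            car.start();

            // State Verification: assert against the state of the SUT...
            Assert.assertTrue(car.isStarted);

            // ...and against the state of its DOC, which Car must expose.
            Assert.assertTrue(car.engine.isRunning);
        }
    }
}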
Figure 1. CarTest, State Verification.

From a requirements perspective, and therefore from a testing and implementation perspective as well, the expectation of calling start on Car is that it will A.) change its running state to true, and B.) invoke run on its Engine instance. As you can see in Figure 1, in order to test the start method on Car, the Engine object must also be tested. In the example, using the State Verification style of testing, Car exposes its Engine instance in order to allow the state of Engine to be verified. This has led to a less than ideal design, as it breaks encapsulation and violates the Principle of Least Knowledge. Obviously, a better design of Car.isStarted could be implemented such that it determines whether its Engine instance is also in a running state; however, realistically, Engine.run will likely need to do more than just set its running state to true; conceivably, it could need to do much, much more. More importantly, while testing Car one should only be concerned with the state and behavior of Car – and not that of its dependencies. As such, it soon becomes apparent that what really needs to be tested with regard to Engine in Car.start is that Engine.run is invoked, and nothing more.

With this in mind, the implementation details of Engine.run are decidedly of less concern when testing Car; in fact, a “real” concrete implementation of Engine need not even exist in order to test Car; only the contract between Car and Engine should be of concern. Therefore, State Verification alone is not sufficient for testing Car.start as, at best, this approach unnecessarily requires a real Engine implementation or, at worst, as illustrated in Figure 1, can negatively guide design as it requires exposing the DOC in order to verify its state; effectively breaking encapsulation and unnecessarily complicating the implementation. To reiterate an important point: State Verification requires an implementation of Engine and, assuming Test First is being followed (ideally, it is), the concern when testing Car should be focused exclusively on Car and its interactions with its DOC, not on their specific implementations. And this is where the second style of testing – Behavior Verification – plays an important role in TDD.

The Behavior Verification style of testing relies on the use of Mock Objects in order to test the expectations of an SUT; that is, that the expected methods are called on its DOC with the expected parameters. Behavior Verification is most useful where State Verification alone would otherwise negatively influence design by requiring the implementation of needless state, if only for the purpose of providing a more convenient means of testing. For example, many times an object may not need to be stateful, or the behavior of an object may not always require a change in its state after exercising the SUT. In such cases, Behavior Verification with Mock Objects will lead to a simpler, more cohesive design, as it requires careful design consideration of the SUT and its interactions with its DOC. A rather natural side effect of this is promoting the use of interfaces over implementations as well as maintaining encapsulation.

For testing with Behavior Verification in Flex, there are numerous Mock Object frameworks available, all of which are quite good in their own right and more or less provide different implementations of the same fundamental concepts. To name just a few, in no particular order, there are asMock, mockito-flex, mockolate and mock4as.

While any of the above Mock Testing Frameworks will do, for the sake of simplicity I will demonstrate rewriting CarTest using Behavior Verification based on mock4as – if for nothing other than the fact that it requires implementing the actual Mock, which helps illustrate how everything comes together. Moreover, the goal of this essay is to help developers understand the design concepts surrounding TDD with Behavior Verification and Mock Objects by focusing on the basic design concepts, not the implementation specifics of any individual Mock Framework.

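The refactored CarTest, again sketched minimally (assuming FlexUnit 4 style tests and mock4as's expects(), success() and errorMessage() methods), is shown below.

package com.example.test
{
    import org.flexunit.Assert;

    public class CarTest
    {
        [Test]
        public function testStartChangesState():void
        {
            // State Verification: only the state of the SUT itself is asserted.
            var car:Car = new Car(new MockEngine());

            car.start();

            Assert.assertTrue(car.isStarted);
        }

        [Test]
        public function testStartInvokesEngineRun():void
        {
            // Behavior Verification: define the expectation, exercise the SUT, verify.
            var mockEngine:MockEngine = new MockEngine();
            mockEngine.expects("run");

            var car:Car = new Car(mockEngine);
            car.start();

            Assert.assertTrue(mockEngine.errorMessage(), mockEngine.success());
        }
    }
}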
Figure 2. CarTest, Behavior Verification approach.

Let’s go through what has changed in CarTest now that it leverages Behavior Verification. First, Car's constructor has been refactored to require an Engine object, which now implements an IEngine interface, defined as follows.

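(Package names here and in the listing which follows are illustrative.)

package com.example
{
    // The contract between Car and its engine: only run() is required.
    public interface IEngine
    {
        function run():void;
    }
}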
Figure 3. IEngine interface.

Note that Engine.isRunning is no longer tested, or even defined, as it is simply not needed when testing Car: only the call to Engine.run is to be verified in the context of calling Car.start. Since focus is exclusively on the SUT, only the interactions between Car and Engine are of importance and should be defined. The goal is to focus on the testing of the SUT and not be distracted by design or implementation details of its DOC beyond that which is needed by the SUT.

MockEngine provides the actual implementation of IEngine and, as you may have guessed, is the Mock object implementation of IEngine. MockEngine simply serves to provide a means of verifying that when Car.start is exercised it successfully invokes Engine.run, effectively satisfying the contract between Car and Engine. MockEngine is implemented as follows:

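(Again a minimal sketch; the inherited record() method is discussed below.)

package com.example
{
    import org.mock4as.Mock;

    // Mock implementation of IEngine: no "real" behavior, it simply records
    // the invocation of run() so the expectation can later be verified.
    public class MockEngine extends Mock implements IEngine
    {
        public function run():void
        {
            record("run");
        }
    }
}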
Figure 4. MockEngine implementation.

MockEngine extends org.mock4as.Mock from which it inherits all of the functionality needed to “Mock” an object, in this case, an IEngine implementation. You’ll notice that MockEngine.run does not implement any “real” functionality, but rather it simply invokes the inherited record method, passing in the method name to record for verification when called. This is the mechanism which allows a MockEngine instance to be verified once run is invoked.

CarTest has been refactored to provide two distinct tests against Car.start. The first, testStartChangesState(), provides the State Verification test of Car, which tests the expected state of Car after being exercised. The second, testStartInvokesEngineRun(), provides the actual Behavior Verification test, which defines the expectations of the SUT and verifies those expectations against the DOC; that is, Behavior Verification tests are implemented such that they first define expectations, then exercise the SUT, and finally verify that the expectations have been met. In effect, this verifies that the contract between an SUT and its DOC has been satisfied.

Breaking down the testStartInvokesEngineRun() test, it is quite easy to follow the steps used when writing a Behavior Verification test.
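
In essence, repeating the body of the test from the sketch above with the three steps called out:

var mockEngine:MockEngine = new MockEngine();
mockEngine.expects("run");               // 1. define the expectation on the DOC

var car:Car = new Car(mockEngine);
car.start();                             // 2. exercise the SUT

Assert.assertTrue(mockEngine.errorMessage(),
                  mockEngine.success()); // 3. verify the expectation has been met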

And that’s basically it. While much more can be accomplished with the many Mock Testing frameworks available for Flex, and plenty of information is available on the specifics of the subject, this essay intentionally focuses on the design benefits of testing with Behavior Verification; that is, the design considerations one must make while doing so.

With Behavior Verification and Mock Objects, design can be guided into existence based on necessity rather than pushed into existence based on implementation.

The example can be downloaded here.