Some Useful Tips to Keep in Mind

Throughout my career I have always been drawn to books which provide a practical way of thinking about software. Books of this nature tend to have an emphasis on fundamental principles which apply to all software engineering disciplines, and form much of the basis of the Agile methodologies many of us have come to appreciate.

Often I find myself going back to the seminal text, The Pragmatic Programmer, as it provides a great resource for some important things I like to keep in mind from day to day. And so, I wanted to take a moment to share some of the best tips from the book which I have found to be particularly useful and inspiring.

Care About Your Craft

Why spend your life developing software unless you care about doing it well?

Provide Options, Don’t Make Lame Excuses

Instead of excuses, provide options. Don’t say it can’t be done; explain what can be done.

Critically Analyze What You Read and Hear

Don’t be swayed by vendors, media hype, or dogma. Analyze information in terms of you and your project.

Design with Contracts

Use contracts to document and verify that code does no more and no less than it claims to do.

Refactor Early, Refactor Often

Just as you might weed and rearrange a garden, rewrite, rework, and re-architect code when it needs it. Fix the root of the problem.

Costly Tools Don’t Produce Better Designs

Beware of vendor hype, industry dogma, and the aura of the price tag. Judge tools on their merits.

Start When You’re Ready

You’ve been building experience all your life. Don’t ignore niggling doubts.

Don’t Be a Slave to Formal Methods

Don’t blindly adopt any technique without putting it into the context of your development practices and capabilities.

It’s Both What You Say and the Way You Say It

There’s no point in having great ideas if you don’t communicate them effectively.

You Can’t Write Perfect Software

Software can’t be perfect. Protect your code and users from the inevitable errors.

Build Documentation In, Don’t Bolt It On

Documentation created separately from code is less likely to be correct and up to date.

Put Abstractions in Code, Details in Metadata

Program for the general case, and put the specifics outside the compiled code base.

Work with a User to Think Like a User

It’s the best way to gain insight into how the system will really be used.

Program Close to the Problem Domain

Design and code in your user’s language.

Use a Project Glossary

Create and maintain a single source of all the specific terms and vocabulary for a project.

Be a Catalyst for Change

You can’t force change on people. Instead, show them how the future might be and help them participate in creating it.

DRY – Don’t Repeat Yourself

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

Eliminate Effects Between Unrelated Things

Design components that are self-contained, independent, and have a single, well-defined purpose.

Iterate the Schedule with the Code

Use experience you gain as you implement to refine the project time scales.

Use the Power of Command Shells

Use the shell when graphical user interfaces don’t cut it.

Don’t Panic When Debugging

Take a deep breath and THINK! about what could be causing the bug.

Don’t Assume It – Prove It

Prove your assumptions in the actual environment—with real data and boundary conditions.

Write Code That Writes Code

Code generators increase your productivity and help avoid duplication.

Test Your Software, or Your Users Will

Test ruthlessly. Don’t make your users find bugs for you.

Don’t Gather Requirements—Dig for Them

Requirements rarely lie on the surface. They’re buried deep beneath layers of assumptions, misconceptions, and politics.

Abstractions Live Longer than Details

Invest in the abstraction, not the implementation. Abstractions can survive the barrage of changes from different implementations and new technologies.

Don’t Think Outside the Box—Find the Box

When faced with an impossible problem, identify the real constraints. Ask yourself: “Does it have to be done this way? Does it have to be done at all?”

Some Things Are Better Done than Described

Don’t fall into the specification spiral—at some point you need to start coding.

Don’t Use Manual Procedures

A shell script or batch file will execute the same instructions, in the same order, time after time.

Test State Coverage, Not Code Coverage

Identify and test significant program states. Just testing lines of code isn’t enough.

Gently Exceed Your Users’ Expectations

Come to understand your users’ expectations, then deliver just that little bit more.

Don’t Live with Broken Windows

Fix bad designs, wrong decisions, and poor code when you see them.

Remember the Big Picture

Don’t get so engrossed in the details that you forget to check what’s happening around you.

Make It Easy to Reuse

If it’s easy to reuse, people will. Create an environment that supports reuse.

There Are No Final Decisions

No decision is cast in stone. Instead, consider each as being written in the sand at the beach, and plan for change.

Estimate to Avoid Surprises

Estimate before you start. You’ll spot potential problems up front.

Use a Single Editor Well

The editor should be an extension of your hand; make sure your editor is configurable, extensible, and programmable.

Fix the Problem, Not the Blame

It doesn’t really matter whether the bug is your fault or someone else’s—it is still your problem, and it still needs to be fixed.

“select” Isn’t Broken

It is rare to find a bug in the OS or the compiler, or even a third-party product or library. The bug is most likely in the application.

Learn a Text Manipulation Language

You spend a large part of each day working with text. Why not have the computer do some of it for you?

Use Exceptions for Exceptional Problems

Exceptions can suffer from all the readability and maintainability problems of classic spaghetti code. Reserve exceptions for exceptional things.

Minimize Coupling Between Modules

Avoid coupling by writing “shy” code and applying the Law of Demeter.

Design Using Services

Design in terms of services: independent, concurrent objects behind well-defined, consistent interfaces.

Don’t Program by Coincidence

Rely only on reliable things. Beware of accidental complexity, and don’t confuse a happy coincidence with a purposeful plan.

Organize Teams Around Functionality

Don’t separate designers from coders, testers from data modelers. Build teams the way you build code.

Test Early. Test Often. Test Automatically.

Tests that run with every build are much more effective than test plans that sit on a shelf.

Find Bugs Once

Once a human tester finds a bug, it should be the last time a human tester finds that bug. Automatic tests should check for it from then on.

Sign Your Work

Craftsmen of an earlier age were proud to sign their work. You should be, too.

It is my hope that you will find some of these tips helpful and, if so, I suggest keeping those which resonate with you (as well as some of your own) someplace visible for reference.

Guiding Design with Behavior Verification and Mock Objects

At some point, every developer who has disciplined themselves in the ritual-like art and science of Test Driven Development discovers that the collaborators on which a class under test depends introduce an additional layer of complexity to consider when writing tests – and when designing APIs.

For example, consider a simple test against a class Car which has an instance of class Engine. Car implements a start method which, when invoked, calls the Engine object’s run method. The challenge here lies in testing the dependency Car has on Engine, specifically, how one verifies that an invocation of Car.start results in the Engine object’s run method being called.

There are two ways of testing the above example of Car, which in unit testing nomenclature is called the System Under Test (SUT), and its Engine instance, which is Car's Depended-on Component (DOC). The most common approach is to define assertions based on the state of both the SUT and its DOC after being exercised. This style of testing is commonly referred to as State Verification, and is typically the approach most developers initially use when writing tests.

Using the above Car example, a typical State Verification test would be implemented as follows:

Figure 1. CarTest, State Verification.
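
A minimal sketch of what such a test might look like, assuming FlexUnit 4 metadata-style tests; the isStarted and engine members are inferred from the discussion below:

```actionscript
package
{
    import org.flexunit.asserts.assertTrue;

    // A sketch only: assumes Car exposes isStarted as well as its
    // Engine instance, as discussed below.
    public class CarTest
    {
        [Test]
        public function testStart():void
        {
            var car:Car = new Car();
            car.start();

            // Verify the state of the SUT...
            assertTrue(car.isStarted);

            // ...and the state of its DOC, which forces Car to expose
            // its Engine instance.
            assertTrue(car.engine.isRunning);
        }
    }
}
```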

From a requirements perspective, and therefore from a testing and implementation perspective as well, the expectation of calling start on Car is that it will A.) change its running state to true, and B.) invoke run on its Engine instance. As you can see in Figure 1, in order to test the start method on Car, the Engine object must also be tested. In the example, using the State Verification style of testing, Car exposes the Engine instance in order to allow the state of Engine to be verified. This has led to a less than ideal design, as it breaks encapsulation and violates the Principle of Least Knowledge. Obviously, a better design of Car.isStarted could be implemented such that it determines whether its Engine instance is also in a running state; however, realistically, Engine.run will likely need to do more than just set its running state to true; conceivably, it could need to do much, much more. More importantly, while testing Car one should only be concerned with the state and behavior of Car – and not that of its dependencies. As such, it soon becomes apparent that what really needs to be tested with regard to Engine in Car.start is that Engine.run is invoked, and nothing more.

With this in mind, the implementation details of Engine.run are decidedly of less concern when testing Car; in fact, a “real” concrete implementation of Engine need not even exist in order to test Car; only the contract between Car and Engine should be of concern. Therefore, State Verification alone is not sufficient for testing Car.start as, at best, this approach unnecessarily requires a real Engine implementation or, at worst, as illustrated in Figure 1, can negatively guide design as it would require exposing the DOC in order to verify its state; effectively breaking encapsulation and unnecessarily complicating the implementation. To reiterate an important point: State Verification requires an implementation of Engine and, assuming Test First is being followed (ideally, it is), the concern when testing Car should be focused exclusively on Car and its interactions with its DOC; not on their specific implementations. And this is where the second style of testing – Behavior Verification – plays an important role in TDD.

The Behavior Verification style of testing relies on the use of Mock Objects in order to test the expectations of an SUT; that is, that the expected methods are called on its DOC with the expected parameters. Behavior Verification is most useful where State Verification alone would otherwise negatively influence design by requiring the implementation of needless state, if only for the purpose of providing a more convenient means of testing. For example, many times an object may not need to be stateful, or the behavior of an object may not always require a change in its state after exercising the SUT. In such cases, Behavior Verification with Mock Objects will lead to a simpler, more cohesive design as it requires careful design consideration of the SUT and its interactions with its DOC. A rather natural side effect of this is promoting the use of interfaces over implementations, as well as maintaining encapsulation.

For testing with Behavior Verification in Flex, there are numerous Mock Object frameworks available, all of which are quite good in their own right and more or less provide different implementations of the same fundamental concepts. To name just a few, in no particular order, there are asMock, mockito-flex, mockolate and mock4as.

While any of the above Mock Testing frameworks will do, for the sake of simplicity I will demonstrate rewriting the CarTest using Behavior Verification based on mock4as – if for nothing other than the fact that it requires implementing the actual Mock, which helps illustrate how everything comes together. Moreover, the goal of this essay is to help developers understand the design concepts surrounding TDD with Behavior Verification and Mock Objects by focusing on the basic design concepts; not the implementation specifics of any individual Mock framework.

Figure 2. CarTest, Behavior Verification approach.
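
A sketch of the refactored test, assuming mock4as's expects()/success() API (the exact calls may differ slightly from the original listing):

```actionscript
package
{
    import org.flexunit.asserts.assertTrue;

    public class CarTest
    {
        [Test]
        public function testStartChangesState():void
        {
            var car:Car = new Car(new MockEngine());
            car.start();
            assertTrue(car.isStarted); // State Verification of the SUT alone
        }

        [Test]
        public function testStartInvokesEngineRun():void
        {
            var mockEngine:MockEngine = new MockEngine();
            mockEngine.expects("run");        // define the expectation

            var car:Car = new Car(mockEngine);
            car.start();                      // exercise the SUT

            assertTrue(mockEngine.success()); // verify the expectation was met
        }
    }
}
```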

Let’s go through what has changed in CarTest now that it leverages Behavior Verification. First, Car's constructor has been refactored to require an Engine object, which now implements an IEngine interface, defined as follows.

Figure 3. IEngine interface.
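
The interface needs only the single method Car actually depends on; a sketch:

```actionscript
package
{
    // Only the contract between Car and Engine is defined.
    public interface IEngine
    {
        function run():void;
    }
}
```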

Note Engine.isRunning is no longer tested, or even defined, as it is simply not needed when testing Car: only the call to Engine.run is to be verified in the context of calling Car.start. Since focus is exclusively on the SUT, only the interactions between Car and Engine are of importance and should be defined. The goal is to focus on the testing of the SUT and not be distracted by design or implementation details of its DOC beyond that which is needed by the SUT.

MockEngine provides the actual implementation of IEngine and, as you may have guessed, is the Mock object implementation of IEngine. MockEngine simply serves to provide a means of verifying that when Car.start is exercised it successfully invokes Engine.run; effectively satisfying the contract between Car and Engine. MockEngine is implemented as follows:

Figure 4. MockEngine implementation.
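
A sketch of the mock, using the inherited record method described below:

```actionscript
package
{
    import org.mock4as.Mock;

    // The mock implements IEngine but provides no "real" behavior;
    // it simply records the invocation for later verification.
    public class MockEngine extends Mock implements IEngine
    {
        public function run():void
        {
            record("run");
        }
    }
}
```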

MockEngine extends org.mock4as.Mock from which it inherits all of the functionality needed to “Mock” an object, in this case, an IEngine implementation. You’ll notice that MockEngine.run does not implement any “real” functionality, but rather it simply invokes the inherited record method, passing in the method name to record for verification when called. This is the mechanism which allows a MockEngine instance to be verified once run is invoked.

CarTest has been refactored to now provide two distinct tests against Car.start. The first, testStartChangesState(), provides the State Verification test of Car; which tests the expected state of Car after being exercised. The second test, testStartInvokesEngineRun(), provides the actual Behavior Verification test which defines the expectations of the SUT and verification of those expectations on the DOC; that is, Behavior Verification tests are implemented such that they first define expectations, then exercise the SUT, and finally, verify that the expectations have been met. In effect, this verifies that the contract between an SUT and its DOC has been satisfied.

Breaking down the testStartInvokesEngineRun() test, it is quite easy to follow the steps used when writing a Behavior Verification test.
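
Condensed to its essentials, the body of that test reduces to three steps:

```actionscript
mockEngine.expects("run");        // 1. define the expectations on the DOC
car.start();                      // 2. exercise the SUT
assertTrue(mockEngine.success()); // 3. verify the expectations were met
```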

And that’s basically it. While much more can be accomplished with the many Mock Testing frameworks available for Flex, and plenty of information is available on the specifics of the subject, this essay intentionally focuses on the design benefits of testing with Behavior Verification; that is, the design considerations one must make while doing so.

With Behavior Verification and Mock Objects, design can be guided into existence based on necessity rather than pushed into existence based on implementation.

The example can be downloaded here.

Vector Iterator for Flex

One of the many welcome additions to the Flex 3.4 SDK is the inclusion of the Vector class. Vectors are especially welcome as they provide compile-time type safety which would otherwise not be available when implementing custom solutions, such as a typed collection.

Essentially, Vectors are just typed Arrays. And while not as robust or powerful, Vectors are similar to Generics in C# and Java. When it is known at design time that a collection will only ever need to work with a single type, Vectors can be utilized to provide type safety and also to allow for significant performance gains over using other collection types in Flex.

I recently wanted to convert quite a few typed Array implementations to Vectors; however, the Arrays were being traversed with an Iterator. In order to reduce the amount of client code which needed to be refactored, I simply implemented a Vector-specific Iterator.

If you are familiar with the Iterator pattern in general, and the Iterator interface in particular, then usage will prove to be very straightforward. You can use the VectorIterator to perform standard iterations over a Vector. Below is an example of a typical client implementation:
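
A hypothetical sketch of such client code, assuming VectorIterator implements the same Iterator interface used by the existing Array-based iterators (both are linked below):

```actionscript
// Hypothetical usage sketch of the VectorIterator linked below.
var makes:Vector.<String> = new Vector.<String>();
makes.push("Toyota", "Honda", "Ford");

var iterator:Iterator = new VectorIterator(makes);

while (iterator.hasNext())
{
    // next() returns Object, so compile-time type safety is lost here.
    trace(iterator.next());
}
```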

Using an Iterator with a Vector ensures only a linear traversal can be performed, which proves useful with Vectors as they are dense Arrays. However, one consideration that must be made when using an Iterator with a Vector is that you lose type safety when accessing items in the Vector via iterator.next(). It is because of this that I would suggest only using Iterators with Vectors to support backwards compatibility when refactoring existing Arrays which are being used with existing Iterators.

The VectorIterator and its associated test are available below:
VectorIterator
VectorIteratorTest

Simple RPC Instrumentation in Flex

On occasion developers may find a need to quickly measure the time it takes for a request to a remote service to return a response to the client, without the need to employ an automated testing tool to perform the instrumentation. This information can prove quite valuable when performing application diagnostics on the client and, when measured in terms of code execution, monitoring at the execution level will always be a bit more precise than what can be measured using a network proxy alone, such as Charles or Fiddler.

Obviously there are numerous solutions which could be implemented to monitor the elapsed time of a service invocation; however, my goal was to provide a unified solution which could easily be added to existing client code without requiring significant refactoring.

In order to achieve this I first needed to consider what a typical service invocation implementation looks like, in order to isolate the commonality. From there it was only a matter of determining a solution that meets the objective in the most non-intrusive manner possible.

To begin, let us consider what a “typical” service invocation might look like for the three most common services available in the Flex Framework: HTTPService, RemoteObject and WebService.
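
Sketches of such invocations might look as follows (URLs, destinations and operation names are placeholders); note that each call returns an AsyncToken:

```actionscript
import mx.rpc.AsyncToken;
import mx.rpc.http.HTTPService;
import mx.rpc.remoting.RemoteObject;
import mx.rpc.soap.WebService;

// HTTPService: send() returns an AsyncToken.
var httpService:HTTPService = new HTTPService();
httpService.url = "http://example.com/products.xml";
var httpToken:AsyncToken = httpService.send();

// RemoteObject: invoking an operation returns an AsyncToken.
var remoteObject:RemoteObject = new RemoteObject("productService");
var remoteToken:AsyncToken = remoteObject.getProducts();

// WebService: operations likewise return an AsyncToken
// (in practice, invoked after the WSDL has loaded).
var webService:WebService = new WebService();
webService.loadWSDL("http://example.com/service?wsdl");
var wsToken:AsyncToken = webService.getQuote("ADBE");
```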

Based on the three above implementations we can deduce that the common API used when performing a service invocation is AsyncToken. So to provide a unified solution for all three common services we could either extend AsyncToken or provide an API which wraps AsyncToken. For my needs I chose to implement an API which simply monitors an AsyncToken, from which the duration of an invocation can be determined; thus I wrote an RPCDiagnostics API which can be “plugged” into an AsyncToken client implementation.

RPCDiagnostics provides basic performance analysis of a Remote Procedure Call by displaying information about the operation duration via a standard trace call. In addition, an event listener of type RPCDiagnosticsEvent can be added to facilitate custom diagnostics and logging.

RPCDiagnostics can easily be implemented as an addition to an existing AsyncToken or in place of an AsyncToken. The following examples demonstrate both implementations.

Implementing RPCDiagnostics onto an existing AsyncToken:
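
A hypothetical sketch; the actual RPCDiagnostics constructor (linked below) may differ:

```actionscript
// Hypothetical: assumes RPCDiagnostics accepts the token to monitor.
var token:AsyncToken = remoteObject.getProducts();
new RPCDiagnostics(token); // traces the operation duration on completion
```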

Implementing RPCDiagnostics in place of an AsyncToken:
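
Again hypothetically, the diagnostics instance can simply wrap the invocation where the AsyncToken would otherwise be captured:

```actionscript
// Hypothetical: the token is passed directly, in place of holding
// a separate AsyncToken reference.
var diagnostics:RPCDiagnostics = new RPCDiagnostics(remoteObject.getProducts());
```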

Implementing a listener to an RPCDiagnostics instance:
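
A hypothetical listener sketch; the event type constant and duration property are assumptions about RPCDiagnosticsEvent:

```actionscript
// Hypothetical: the COMPLETE constant and duration property are assumptions.
diagnostics.addEventListener(RPCDiagnosticsEvent.COMPLETE, onDiagnostics);

function onDiagnostics(event:RPCDiagnosticsEvent):void
{
    trace("RPC completed in " + event.duration + "ms");
}
```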

The RPCDiagnostics API and dependencies can be downloaded via the Open Source AS3 APIs page or from the below links:

RPCDiagnostics
RPCDiagnosticsEvent
Execution

Design Considerations: Naming Conventions

Intuitive naming conventions are perhaps one of the most important factors in providing a scalable software system. They are essential to ensuring an Object Oriented System can easily be understood, and thus modified by all members of a team regardless of their tenure within the organization or individual experience level.

When classes, interfaces, methods, properties, identifiers, events and the like fail to follow logical, consistent and intuitive naming conventions, the resulting software becomes significantly more complex to understand, follow and maintain, making changes much more challenging than they would have been had better naming been considered from the start. Of equal concern is the inevitability that poor naming will lead to redundant code scattered throughout a project: when the intent of code is not clearly conveyed, developers tend to re-implement existing functionality when the needed API cannot easily be located or identified.

Code is typically read many, many more times than it is written. With this in mind, it is important to understand that the goal of good naming is to be as clear and concise as possible so that a reader can easily determine the code's intent and purpose just by reading it.

Teams should collectively define a set of standard naming conventions which align with the typical conventions of their language of choice. Doing so helps avoid arbitrary naming conventions, which often result in code whose intent is significantly harder to determine, and thus harder to maintain. Of equal importance is the need for the various teams within the same engineering department to standardize on domain-specific terms which align with the non-technical terms used by business stakeholders. Together this helps develop a shared lexicon between business owners and engineers, and allows for simplified analysis of requirements.

Ideally, code should follow the PIE principle (Program Intently and Expressively) – that is, code should clearly convey purpose and intent. In doing so, maintaining a software application over time becomes significantly easier, and the possibility of introducing risk to project deliverables is limited.
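
As a hypothetical illustration, compare a signature which hides its intent with one which conveys it outright:

```actionscript
// Unclear: the reader must open the implementation to learn the intent.
function proc(list:Array):Array { return []; }

// Expressive: purpose and intent are conveyed by the signature alone.
function filterExpiredAccounts(accounts:Array):Array { return []; }
```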

In short, conventions are very important regardless of a team's size; be it a large collaborative team environment or a single developer who only deals with his own code. Consistency and conventions are a key aspect of ensuring code quality.

Perfectionism, Prudence and Progress

Yesterday there was an interesting article on InsideRIA titled “How much is too much?”. This is a great topic, and one I have asked myself about at times.

Personally, I never take the “easy way out”, preferring to do things the “hard way”, so to speak. At times the benefits of doing things according to best practices, standards and conventions (a.k.a. “the right way”) may not be immediately obvious. However, over the years experience has taught me that in time the benefits always reveal themselves, and the pros certainly outweigh the cons.

When a good amount of forethought is given to a decision, a design, an implementation and so forth, a team is almost certainly afforded the ability to continue development feasibly and in a less challenging manner (as opposed to dealing with endless maintenance challenges). When things are done quickly, with little regard for anything other than getting working code out the door, the result is always failure at some level – most commonly in the maintainability of the product.

With all this in mind, it is important to understand that at the end of the day our development efforts, for better or for worse, are simply a means to an end for a specific business need. Therefore, just as writing “quick and dirty” code has a negative impact on the business, so too does being a complete perfectionist. Admittedly, this used to be a challenge for me, as I would tend to need designs, tests and code to “feel right” before considering them production ready, which typically resulted in me working many extra hours on my own time. This in itself is not necessarily a bad thing, but could rather be considered a labor of passion.

Ultimately the goal should be to find just the right balance of perfectionism, prudence and progress, providing necessary trade-offs where appropriate.