
IIFE in ES6

TL;DR: In ES6, an IIFE written with an arrow function is implemented as follows:
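
(A sketch; the logged message is illustrative.)

```javascript
// An arrow-function IIFE: the invoking parentheses must fall
// outside the wrapping parentheses
(() => {
  console.log('invoked immediately');
})();
```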


Unlike ES5, which is syntactically less opinionated, in ES6, when writing an IIFE with an arrow function, parenthetical order matters.

For instance, in ES5 an IIFE could be written either as:
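
```javascript
// Invocation made inside the wrapping parentheses
(function () {
  console.log('invoked immediately');
}());
```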

or
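
```javascript
// Invocation made outside the wrapping parentheses
(function () {
  console.log('invoked immediately');
})();
```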

As can be seen in the above examples, in ES5 one could either wrap and invoke a function expression in parentheses, or wrap the function expression in parentheses and invoke the function outside of the parentheses.

However, in ES6, the former results in a SyntaxError when using an arrow function; thus, one cannot use:
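
```javascript
// SyntaxError: with an arrow function, the invoking parentheses
// cannot appear inside the wrapping parentheses
(() => {
  console.log('never invoked');
}());
```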

But rather, the invocation must be made outside of the parentheses as follows:
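
```javascript
(() => {
  console.log('invoked immediately');
})();
```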

As an aside for those who are curious, the syntax requirements are specific to ES6 and not a by-product of any particular transpiler (e.g. Babel, Traceur, etc.).

Polymer Behaviors in ES6

As typical aspects of Object Oriented Design, inheritance and mixins provide the means by which modular reuse patterns can be facilitated within a given system. Similarly, Polymer facilitates code reuse patterns by employing the notion of shared behavior modules. Let’s take a quick look at how to leverage them in Polymer when using ES6 classes.

Implementing Behaviors

Implementing a Behavior is quite simple: just define an object within a block expression or an IIFE, and expose it via a namespace or the module loader of your choice:

some-behavior.js
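
A minimal sketch (the app namespace and the greet method are illustrative):

```javascript
(() => {
  'use strict';

  window.app = window.app || {};

  // Expose the behavior on a shared namespace so that
  // elements can extend it
  app.SomeBehavior = {
    greet(name) {
      return `Hello, ${name}!`;
    }
  };
})();
```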

Then, include the behavior in a corresponding .html document of the same name so as to allow the behavior to be imported by subsequent elements:

some-behavior.html
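
A minimal sketch of the wrapping import document:

```html
<script src="some-behavior.js"></script>
```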

Extending Behaviors

After having defined and exposed a given Behavior, the Behavior can then be extended by element classes via a behaviors getter / setter, as follows:
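
A sketch in the Polymer 1.x class style, where the class itself is passed to Polymer() and the element’s is property is assigned in beforeRegister; the element and method names are illustrative:

```javascript
class SomeElement {
  beforeRegister() {
    this.is = 'some-element';
  }

  // Extend the behavior via a behaviors getter
  get behaviors() {
    return [app.SomeBehavior];
  }

  ready() {
    console.log(this.greet('Polymer')); // provided by SomeBehavior
  }
}

Polymer(SomeElement);
```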

Once the behavior has been extended, simply import the behavior in the element’s template (or element bundle, etc.) and it is available to the element’s class:
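
For example, within the element’s import (illustrative):

```html
<link rel="import" href="some-behavior.html">
```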


Implementing Multiple Behaviors

Similar to individual behaviors, multiple behaviors can also be defined and extended:

first-behavior.js
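
A sketch (the namespace and method bodies are illustrative):

```javascript
(() => {
  'use strict';

  window.app = window.app || {};

  app.FirstBehavior = {
    first() {
      console.log('first');
    }
  };
})();
```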

second-behavior.js
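
```javascript
(() => {
  'use strict';

  window.app = window.app || {};

  app.SecondBehavior = {
    second() {
      console.log('second');
    }
  };
})();
```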

In certain cases, I have found it helpful to group related behaviors together within a new behavior (an array) which bundles the individual behaviors together:
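
A sketch (per the note below, FourthBehavior is simply an array of the behaviors defined above):

fourth-behavior.js

```javascript
(() => {
  'use strict';

  window.app = window.app || {};

  // A behavior implemented as an Array which bundles
  // previously defined behaviors together
  app.FourthBehavior = [
    app.FirstBehavior,
    app.SecondBehavior
  ];
})();
```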

Note: As can be seen in the FourthBehavior, a behavior can also be implemented as an Array of previously defined behaviors.

Extending Multiple Behaviors

As with extending individual behaviors, multiple behaviors can also be extended using a behaviors getter / setter. However, when extending multiple behaviors in ES6, there are syntactic differences which one must take note of. Specifically, the behaviors getter must be implemented as follows:
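
One possible shape (a sketch; the concat call flattens the array-based FourthBehavior into a single flat list):

```javascript
class AnotherElement {
  beforeRegister() {
    this.is = 'another-element';
  }

  get behaviors() {
    // Return one flat array containing all extended behaviors
    return [].concat(
      app.FirstBehavior,
      app.SecondBehavior,
      app.FourthBehavior
    );
  }
}

Polymer(AnotherElement);
```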


And that’s basically all there is to it. Hopefully this article helped outline how Polymer Behaviors can easily be leveraged when implementing elements as ES6 classes. Enjoy.

BDD/TDD Mental Models

Recently, I shared a simple 8-step procedure with my team which outlines some of the general questions I tend to ask myself when writing tests, even if, perhaps, only subconsciously so.

While quite simple in form, and somewhat obvious in process, this procedure helps to develop a useful mental model from which practical steps can be applied to common testing scenarios; which, in turn, helps to provide clarity of general design considerations, while also helping to guide specific implementation decisions.

First things First

Arguably, the single most important aspect of testing (and software development in general, for that matter) is to acquire a solid understanding of the problem domain; for, without having (at minimum) a general understanding of the problem one is intending to solve, important details are likely to be omitted which would have otherwise been considered, and thus, covered by our tests. Spend time understanding exactly what problem your code is intended to solve, then begin thinking about what to test for. Understand the Problem.

Small Steps

Once confident that a good understanding of the problem has been reached, we can then get started on writing our initial tests. Consider this a first pass, if you will, wherein we are only concerned with getting our tests to pass in the simplest (typically, least elegant) way possible. The initial implementation code can be as raw (and ugly) as needed, as this can (and will) be addressed after our initial tests are passing. If we are writing tests against code that does not yet exist, then we will first write the implementation code (the code that is being tested) directly within the test case itself. Once the test passes, we can then refactor the code out from our test and into the SUT (the code we are testing), as sketched below. If the code already exists (we are writing new tests against existing code), we still need to understand and consider the implementation of the code itself, and not simply write tests against it. Reviewing and critiquing existing code is an excellent way of gaining a quick understanding of a given system. Seize initial opportunities. Start off slow.
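
A minimal Jasmine-style sketch of that first pass (the add function is illustrative):

```javascript
describe('add', () => {
  it('sums two numbers', () => {
    // First pass: the implementation lives directly in the test
    const add = (a, b) => a + b;
    expect(add(2, 3)).toEqual(5);
  });
});

// Once green, add() is refactored out of the test and into the SUT.
```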

Clean Pass

Once we’ve written our initial tests and they are passing, we can then safely go back into our new or existing implementation code and refactor it to our heart’s content. If we break something, our tests will let us know. After all, one of the most rewarding aspects afforded by unit testing is the ability to refactor our code freely, with little worry or concern that we will unintentionally break something without knowing; if something breaks, our tests will inform us. Tomorrow never comes in Software Development. Clean up as you go along.

Negative Tests

The most obvious tests to write are those which assert the things we expect the code to do. But what if the code is used incorrectly? What if an argument is required and it is not provided, or it is of an invalid type? Does our code throw an exception? Does it simply return undefined? What should it do? These are all questions we should be asking ourselves once our expected test cases are passing. After that, we need to start thinking about ways to have our code appropriately respond to negative cases – we don’t want the entire app to be left in an unpredictable state just because an uncaught exception was thrown due to some simple string-formatting argument not being passed. Test the exceptional; Test the unexpected.
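
A hypothetical negative test (Jasmine-style; formatString and its contract are illustrative):

```javascript
describe('formatString', () => {
  it('throws a TypeError when the required template argument is missing', () => {
    expect(() => formatString()).toThrowError(TypeError);
  });
});
```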

Stateless Tests

One of the most important considerations to make, both during and especially after all of the above points have been considered, is the statelessness of the system while being tested. Always ask yourself: “Am I resetting the state of all my test’s dependencies back to an expected state?”. This is perhaps one of the most commonly overlooked, yet crucially important, considerations to make. A good example illustrating why this is important can be found in the common scenario of a test that invokes a method which triggers an event. If an object created by a previously executed test which handles that event has not been properly torn down (e.g. in afterEach), the object will still exist, and thus handle the event. This typically results in a change in state, more often than not causing an unexpected error to be thrown. Always use set-ups (e.g. beforeEach) to configure your test environment, fixtures, and any dependencies your test requires to operate properly. If you are setting values on anything outside the context of your tests, always use mocks, stubs, and tear-down methods (e.g. afterEach) to reset them back to an expected state. Remember, while your tests are not part of your application’s source, they are certainly part of your project’s source; this, in effect, requires them to be viewed as first-class citizens, subject to the same quality of design and implementation as application source. Tests will need to evolve and be continually maintained. Treat the test environment with respect; ensure you return it in a predictable state. Leave it the way you found it.
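
A minimal Jasmine-style sketch of this set-up/tear-down symmetry (OrderView is illustrative):

```javascript
describe('OrderView', () => {
  let view;

  beforeEach(() => {
    // Configure a fresh fixture before every test
    view = new OrderView();
    view.render();
  });

  afterEach(() => {
    // Tear down: remove the view and unbind its event handlers
    // so that no state leaks into subsequent tests
    view.remove();
    view = null;
  });

  it('updates its total when an item is added', () => {
    view.addItem({ price: 10 });
    expect(view.total).toEqual(10);
  });
});
```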

Continued Improvement

While the above description of Stateless Tests clearly states that the test environment should remain stateless, and thus be returned to the state in which we found it prior to our tests, our actual implementation code should always be improved when improvements can be made; hence, The Broken Windows Theory is one we should all strive to live by. This especially holds true in the context of writing tests/specs against existing code. If the code is not up to par in any way – fix it. Ask yourself: “How easy was it for me to understand what this code does?”, “Is it documented in a meaningful way?”, “Would it be easier to understand if I added some quick examples?” (often, adding examples is simply a matter of pointing to, or annotating the source with, the test cases themselves). We can have the greatest, most elegant framework and foundation on which to build the greatest apps in the world, but if we allow our code quality to degrade, our apps will gradually decay into chaos. Set a higher standard, and live by it. Leave the source better than you found it.

Meaningful Tests

It is quite easy to get caught up in the perceived quality of a system’s tests simply by measuring it against general Code Coverage metrics. This is a subject I have spoken to at length many times. While code coverage certainly has its purpose, and can be helpful, it is often not very reflective of reality. Judge your tests not by the number of test cases or units tested, but rather by the meaningfulness of each specific test case itself. Ask yourself: “What is the overall value of this test?”, “Am I testing the obvious?” (such as a simple getter/setter). Focus on what’s important; test what’s of most value first. This will afford one the satisfaction of knowing that if time constraints arise, or something comes up which requires shifting focus to something else, the most important test cases are covered. Focus on what’s important.

Know when you are done

It is quite possible for one to go on refactoring beyond what is essential. As such, it’s important to know when you’re done. Some questions to ask yourself: “Does the code do what it needs to do?”, “Is the code clean and understandable, performant, efficient, etc.?”, “Does it have adequate coverage?” If these questions can be answered in the affirmative, then you’re most likely done. Many times it’s tempting to continually refactor, as the more one refactors, the more opportunities for further abstraction begin to arise. When confident that your most important objectives have been met, you’re done. Know when to stop.

Concluding Thoughts

It is important to note that the above considerations are by no means exhaustive – and this is intentionally so; as each point is specifically intended to provide just enough guidance to sufficiently ask the right questions, and thus solve problems in a pragmatic manner.

Over the years, I have found that it can be particularly helpful for developers new to a specific domain, or new to TDD/BDD in general, to consider the steps listed above from time to time in a general, summarized form. After doing this regularly, it becomes second nature; engrained in one’s daily development process.

  1. Understand the Problem
  2. Start off slow
  3. Clean up as you go along
  4. Test the unexpected
  5. Leave the test environment the way you found it
  6. Leave the source better than you found it
  7. Focus on what’s important
  8. Know when to stop

Quick Tip: Backbone Collection Validation

Oftentimes I find the native Backbone Collection implementation to be lacking when compared to its Backbone.Model counterpart. In particular, Collections generally lack direct integration with a backend persistence layer, as well as the ability to validate models within the context of the collection as a whole.

Fortunately, such shortcomings can easily be circumvented due to the extensibility of Backbone’s design as a generalized framework. In fact, throughout my experience utilizing Backbone, there has yet to be a problem I have come across which I was unable to easily solve by leveraging one of the many Backbone extensions or, more often than not, by simply overriding Backbone’s default implementation of a given API.

Validating Collections

Perhaps a common use-case for validating a collection of Models can be found when implementing editors which allow for adding multiple entries of a given form section (implemented as separate Views), whereby each section has a one-to-one correlation with an individual model. Rather than invoke validation on models from each individual view, and manage which models are in an invalid state from the context of a composite view, it can be quite useful to simply validate the collection from the composite view which, in turn, results in all models being validated and their associated views updating accordingly.

Assuming live validation is not being utilized, validation is likely to occur when the user submits the form. As such, it becomes necessary to validate each model after its view has updated it as a result of the form being submitted. This can be achieved quite easily by implementing an isValid method on the collection which simply invokes isValid on each model within the collection (or, optionally, against specific models within the collection). A basic isValid implementation for a Collection is as follows:
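
A sketch (the collection name is illustrative; each model is assumed to implement validate):

```javascript
var SomeCollection = Backbone.Collection.extend({
  // The collection is valid only if every one of its models
  // is valid; invoking isValid re-validates each model and
  // triggers any corresponding "invalid" events
  isValid: function (options) {
    return this.models.every(function (model) {
      return model.isValid(options);
    });
  }
});
```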

As can be seen in the above example, the Collection’s isValid method simply invokes isValid on its models. This causes each model to be re-validated which, in turn, results in any invalid models triggering their corresponding invalidation events, allowing views to automatically display validation indicators, messages, and the like; particularly when leveraging the Backbone.Validation Plugin.

This example serves well to demonstrate that, while Backbone may not provide everything one could ever ask for “out of the box”, it does provide a design which affords developers the ability to quickly, easily, and effectively extend the native framework as needed.

Function Modules in RequireJS

Having leveraged RequireJS as part of my preferred client-side web stack for some time now, I find it rather surprising how often I recall various features from the API docs which I have yet to use, and how they may apply to a specific solution I am implementing.

One such feature is the ability to define a module as a function. While at first this may seem a rather basic feature, and indeed it is, there are quite a few practical purposes in which returning a function as a module definition can prove useful.

Function Modules

As one may expect, returning a function as a Module in RequireJS is rather straightforward. For example, a simple random function can be defined as a module as follows:
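
A sketch (random.js; the bounds are illustrative):

```javascript
// random.js
define(function () {
  // The module itself is a function
  return function (min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
  };
});
```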

Then, the function module can simply be invoked as follows:
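
```javascript
require(['random'], function (random) {
  console.log(random(1, 10)); // e.g. 7
});
```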

Function Modules as Factories

Perhaps a more practical example of implementing a function module is in the context of a Factory Method.

For instance, a function module can be used as a convenient means of creating a specific type of object on a client’s behalf based on certain conditions.

Take (an intentionally simple) example of a function module which, given a specific role type, returns a corresponding view implementation (in this example, a Backbone View) for an Editor feature:
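
A sketch (the module paths, role types, and view names are all illustrative):

```javascript
// editor-factory.js
define([
  'views/admin-editor-view',
  'views/user-editor-view'
], function (AdminEditorView, UserEditorView) {
  // The module is a factory function which maps a role type
  // to a corresponding Backbone View
  return function (role) {
    switch (role) {
      case 'admin':
        return new AdminEditorView();
      case 'user':
        return new UserEditorView();
      default:
        throw new Error('Unsupported role: ' + role);
    }
  };
});
```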

Client code can then simply invoke the factory in order to retrieve the appropriate Editor view implementation based on the specified role type:
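
```javascript
require(['editor-factory'], function (editorFactory) {
  var editor = editorFactory('admin'); // an AdminEditorView instance
  editor.render();
});
```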

Defining modules as functions which serve as factory methods can help simplify client code implementations, as the responsibility of determining the type of object to create, configure, etc., can be delegated to a dedicated object; thus allowing for simpler designs which better facilitate code reuse, testing, and maintenance.

As a general rule of thumb, I typically reserve implementing modules as functions for cases in which a package level method would be appropriate, or for factory implementations. However, there are other scenarios in which function modules may apply, and so they are certainly worth noting.


Fluent APIs and Method Chaining

Of the vast catalog of Design Patterns at our disposal, oftentimes I find it is the simpler, less prominent patterns which are used quite frequently, yet receive much less recognition; a good example being the Method Chaining Pattern.

Method Chaining

The Method Chaining Pattern, as I have come to appreciate it over the years, represents a means of facilitating expressiveness and fluency when used articulately, and mere convenience in its less sophisticated use-cases.

Design Considerations

When considering Method Chaining, one should take heed not to use the pattern merely as syntactic sugar for writing fewer lines of code; rather, Method Chaining should be used, perhaps more appropriately, as a means of implementing Fluent APIs which, in turn, allow for writing more concise expressions. By design, such expressions can be written, and thus read, in much the same way as natural language, though they need not be the same from a truly lexical perspective.

The resulting terseness afforded by Method Chaining, while convenient, is in most cases not in and of itself reason enough for leveraging the pattern.

Implementation

Method Chaining, when considered purely from an implementation perspective, is perhaps the simplest of all design patterns. Its basic mandate simply prescribes returning a reference to the object on which a method is being called (in most languages, JavaScript in particular, the this pointer).

Consider the following (intentionally contrived) example:
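
(A sketch; the Logger is illustrative.)

```javascript
function Logger() {
  this.entries = [];
}

Logger.prototype.log = function (message) {
  this.entries.push(message);
  return this; // returning `this` is what enables chaining
};

Logger.prototype.clear = function () {
  this.entries.length = 0;
  return this;
};

// Methods may now be chained:
new Logger().log('first').log('second').clear();
```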

As can be seen, implementing Method Chaining requires nothing more than simply having methods return a reference to this.

API Simplicity

Method Chaining is typically used when breaking from traditional Command Query Separation (CQS) principles. The most common example is the merging of both getters (Queries) and setters (Commands). I especially like this technique as, aside from being very easy to implement, it allows an API to be used in a more contextual manner from the developer’s perspective, as opposed to that specified by the API designer’s preconceptions of how the API will be used. For example:
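
(A sketch; Notification is illustrative.)

```javascript
function Notification() {
  this._message = '';
}

// A single method serving as both Query and Command
Notification.prototype.message = function (value) {
  if (arguments.length === 0) {
    return this._message; // invoked as a getter
  }
  this._message = value;  // invoked as a setter...
  return this;            // ...which allows further chaining
};

var note = new Notification();
note.message('Saved!').message(); // "Saved!"
```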

As can be seen, the message method serves as both a getter and setter, allowing users of the API to determine how the method should be invoked based on context, as well as affording developers the convenience of needing only to remember a single method name. This technique is used quite heavily in many JavaScript libraries and has undoubtedly contributed to their success.

We could further expand on this concept by determining a method’s invocation context based on the arguments provided, or the types of specific arguments, thus, in turn, merging various similar methods based on a particular context.

An important design recommendation to consider: if you are writing an API which violates CQS (which is quite fine, IMHO), API consistency is, as always, important; thus all such getters and setters should be implemented in the same manner.

Fluency

As was mentioned, in most cases Method Chaining is leveraged to facilitate APIs which are intended to be used fluently (e.g. an Internal DSL). Such implementations typically provide methods which, by themselves, may have little meaning; however, when combined, they allow for writing expressions which are self-describing and make logical sense to users of the API.

For example, consider the way one might describe a Calendrical Event:

Vacation, begins June 21st, ends July 5th, recurs Yearly.

We can easily implement a Fluent API such that the above grammar can be emulated in code as follows:
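
(A sketch; CalendarEvent and its method names are illustrative.)

```javascript
function CalendarEvent(title) {
  this.title = title;
}

CalendarEvent.prototype.begins = function (date) {
  this.start = date;
  return this;
};

CalendarEvent.prototype.ends = function (date) {
  this.end = date;
  return this;
};

CalendarEvent.prototype.recurs = function (interval) {
  this.interval = interval;
  return this;
};

var vacation = new CalendarEvent('Vacation')
  .begins('June 21')
  .ends('July 5')
  .recurs('Yearly');
```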

The same methods can also be chained in different combinations, yet yield the same value:
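
```javascript
// A different ordering, same resulting state
var sameVacation = new CalendarEvent('Vacation')
  .recurs('Yearly')
  .ends('July 5')
  .begins('June 21');
```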

Given the above example, we could further improve on the fluency of the implementation by adding intermediate methods which can, by themselves, simply serve to aid in readability, or provide an alternate modifier for chaining:
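
(A sketch; the and and on modifiers are illustrative.)

```javascript
// `and` exists purely to aid readability; `on` provides an
// alternate modifier for `begins`
CalendarEvent.prototype.and = function () {
  return this;
};

CalendarEvent.prototype.on = CalendarEvent.prototype.begins;

var summerVacation = new CalendarEvent('Vacation')
  .on('June 21')
  .and().ends('July 5')
  .and().recurs('Yearly');
```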

When implementing Fluent APIs, we can design such that different logical chaining combinations yield the same result, thus affording users of the API the convenience of determining the most appropriate expressions based on context or personal preference, even grammatically so. Illogical chaining combinations can be handled either by throwing an exception, or by simply ignoring them based on the context of a preceding invocation – though, of course, one should aim to avoid designs which allow for illogical chaining.

The Ubiquitous Example – jQuery

While Method Chaining and Fluent APIs, as with most design patterns, are language agnostic, in the JavaScript world perhaps the most well-known implementation is the jQuery API; for example:
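
(The selector and handler here are illustrative.)

```javascript
// Each jQuery method returns the jQuery object, allowing the
// selection to be queried and configured in a single expression
$('.menu-item')
  .addClass('active')
  .fadeIn(200)
  .on('click', function () {
    $(this).toggleClass('selected');
  });
```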

In addition to jQuery, there are numerous other JavaScript Method Chaining and Fluent APIs of note; Jasmine in particular has a very expressive API which aligns excellently with its design goals. The various libraries which implement the Promises/A spec also provide very clear and concise Fluent APIs.

Concluding Thoughts

Over the years I have leveraged Method Chaining to facilitate the design of Fluent APIs for various use-cases. The two patterns, when combined, can be especially useful when designing Internal DSLs, whether for third-party libraries or for APIs specific to a particular business domain.