
Synchronizing package.json with yarn.lock


After using Yarn almost exclusively for the past couple of years, one nagging issue has continually cropped up: the inability to keep a project’s package.json dependency versions in sync with the actual versions in yarn.lock. While running yarn upgrade updates all packages to the latest versions allowed by their semver ranges, the versions defined in package.json are not updated to reflect the versions to which the packages have been upgraded.

This can prove problematic, as one cannot easily discern a project’s dependency versions simply by viewing their respective values in package.json.

In particular, as part of our release process, after each production release I have scripts which automate updating all project dependencies to their latest Minor and Patch revisions prior to opening master for new development. While the scripts manage the updates and commits internally, each project’s package.json would remain unmodified, making it challenging to determine which packages had been upgraded and which had not. Having to automate or manually inspect the yarn.lock files is less than ideal, and quite cumbersome to say the least.

Fortunately, like most things in the JavaScript world, there is a package for this: syncyarnlock, which provides exactly what one needs to ensure that the dependency versions defined in package.json are kept in sync with the project’s yarn.lock.

Simply install syncyarnlock, and execute it with the options applicable to your needs.

For example, to sync a project’s package.json with the project’s yarn.lock, and have the ranges remain intact while updating the versions to reflect what will actually be installed, simply run: syncyarnlock -s -k.
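A typical invocation might look as follows (a sketch which assumes a global install via Yarn; a local install works just as well):

  # one-time global install
  yarn global add syncyarnlock

  # sync package.json versions with yarn.lock, keeping range operators intact
  syncyarnlock -s -k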

This preserves the semver range operators while updating the declared versions to match those resolved in yarn.lock; for example, a dependency declared as ^1.0.0 which resolves to 1.4.2 in yarn.lock would become ^1.4.2 in package.json.

And with that, we have proper syncing. A definite time-saver!

Separation of Concerns: propTypes and Immutable.js

When considering the separation of concerns between Container and Presentational Components (stateful / stateless components), I find it useful to leverage the core concepts of these patterns in order to define a clear boundary between where Immutable data types are used, and where raw JavaScript types are referenced exclusively.

By having a clear separation which compartmentalizes where Immutable types are used and where they are not, team members can easily determine a component’s propTypes; without a clear cut-off point, one must stop to consider whether a prop passed down to a component will be an Immutable object or not.

It’s no stretch of the imagination to see how this can quickly lead to code which becomes much harder to maintain than it needs to be. As such, the Container / Presentational Component pattern provides a rather natural boundary for separating these concerns.

Unfortunately, while such a boundary may seem rather obvious, it is not always clearly defined, and this tends to lead to overly complex propType declarations.

For instance, on a number of occasions I’ve seen propTypes declared similar to the following (a representative sketch):
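  import PropTypes from 'prop-types';

  // a representative sketch (the oneOfType shape is reconstructed for
  // illustration): the declaration hedges between raw and Immutable types
  SomePresentationalComponent.propTypes = {
    // a plain Array in some usages, an Immutable.List (an object) in others
    someList: PropTypes.oneOfType([
      PropTypes.array,
      PropTypes.object
    ]).isRequired,
    // could be a plain Object or an Immutable.Map; the declaration
    // cannot distinguish between the two
    someItem: PropTypes.object.isRequired
  };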

Given the above example, it’s obvious that the expected propTypes were unclear to the original implementor (or current maintainer) of SomePresentationalComponent. In certain cases, it appears someList could be of type array, whereas in other cases it could be of type object (e.g. an Immutable.List). Likewise, in some cases someItem could be a plain object, whereas in others it could be an Immutable.Map.

As you can see, this is obviously problematic, and a very good candidate for a bug (not to mention a maintenance headache).

Moreover, it results in all sorts of unnecessary type check permutations before accessing properties. For example, just to check the length of the list:
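  import { List } from 'immutable';

  // a sketch of the defensive check this forces just to read a length
  const getCount = (someList) =>
    List.isList(someList) ? someList.size : someList.length;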

Likewise, just to get the id of someItem:
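  import { Map } from 'immutable';

  // and again, just to read a single property
  const getId = (someItem) =>
    Map.isMap(someItem) ? someItem.get('id') : someItem.id;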

At best, this is far from ideal, to say the very least …

Now, obviously the developer could simply define a single propType and refactor any Containers which are passing an invalid type; however, it may not always be clear what the type should be. If, say, the component is used by multiple applications to which the developer does not have access, and some of those applications are not using Immutable.js, it would be best to simply disallow Immutable types in the component altogether and have consumers of the component update their Containers. In any event, the situation is symptomatic of a team not having a clear understanding of which components work exclusively with Immutable data types, and which do not.

Solutions

Fortunately, as one might imagine, there are a couple of very simple solutions to this problem:

  1. Only use Immutable types throughout the entire application.
  2. Segment which components use Immutable types, and which do not.

Now, in some cases the argument for Option #1 may very well be a valid one; however, I find Option #2 to be much more feasible (and flexible), as it helps ensure Presentational components are kept pure – and that means only using JavaScript types. For my purposes this is especially important, as I maintain a shared library which must limit dependencies as much as possible, and while some projects are using Immutable, Redux, etc., others are not. As always – consider the context.

Pros

By having an internal design contract (or convention) which mandates that Container components only ever work with Immutable types, and Presentational components are only ever passed JavaScript data types, it becomes much clearer to team members where the boundary is defined, and thus much easier to maintain a large application over time.

Furthermore, it allows less experienced developers to gradually become acclimated to the React ecosystem by assigning them tasks focused on presentational features. This can be very useful, as it only requires knowledge of core concepts without being inundated with additional libraries and APIs. This approach also affords team members with more experience the ability to focus on the more complex portions of the application (application logic, reducers, containers, etc.).

In addition, destructuring, …rest parameters, and related ES6 features can be used much more extensively to simplify implementation when using JavaScript types exclusively, helping to ensure Presentational components are kept intentionally “dumb”. Not to mention, testing becomes considerably less complex when working with native JavaScript types – equally important when helping newer developers become productive while still getting up to speed.

And, while not always likely, by reducing our dependency on Immutable.js we position ourselves for a much easier migration path in the event we decide to swap out Immutable for another library in the future.

Cons

Arguably, one could be justified in the assertion that only Immutable Data types should be used by both Container and Presentational components (Option #1), and indeed that would be a fair argument if you will be calling toJS() frequently when passing props down to Presentational Components (as there is obviously an inherent expense in doing so).

That being said, there is no reason why one would need to call toJS when passing props to Presentational Components as the Immutable API can be utilized to reduce the given props before being passed down to child components. In such cases, a Higher Order Component can be defined for doing either, which can simplify implementation considerably.
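A minimal sketch of such a Higher Order Component (the name toJSProps is illustrative, and Iterable.isIterable assumes Immutable.js 3.x):

  import React from 'react';
  import { Iterable } from 'immutable';

  // convert any Immutable props to plain JavaScript types before they
  // cross the boundary into the Presentational component
  const toJSProps = (WrappedComponent) => (props) => {
    const jsProps = {};
    Object.keys(props).forEach((key) => {
      const value = props[key];
      jsProps[key] = Iterable.isIterable(value) ? value.toJS() : value;
    });
    return <WrappedComponent {...jsProps} />;
  };

  // usage (UserList is a hypothetical Presentational component):
  // const PlainUserList = toJSProps(UserList);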

Summary

Like most design decisions, there is rarely a one-size-fits-all approach that perfectly solves any given problem, and what ultimately makes sense in one context, may not always be appropriate in another. However, in the context of when and where Immutable types are used, in most cases it is fair to say there should always be a clear boundary defined, regardless of where that boundary must be.

BDD/TDD Mental Models

Recently, I shared a simple 8-step procedure with my team which outlines some of the general questions I tend to ask myself when writing tests, even if, perhaps, only subconsciously so.

While quite simple in form, and somewhat obvious in process, this procedure helps to develop a useful mental model from which practical steps can be applied to common testing scenarios, which, in turn, helps provide clarity around general design considerations, while also helping to guide specific implementation decisions.

First things First

Arguably, the single most important aspect of testing (and software development in general, for that matter) is acquiring a solid understanding of the problem domain; for, without at least a general understanding of the problem one intends to solve, important details are likely to be omitted which would otherwise have been considered, and thus covered by our tests. Spend time understanding exactly what problem your code is intended to solve, then begin thinking about what to test for. Understand the Problem.

Small Steps

Once confident that a good understanding of the problem has been reached, we can get started on writing our initial tests. Consider this a first pass, if you will, in which we are only concerned with getting our tests to pass in the simplest (typically least elegant) way possible. The initial implementation code can be as raw (and ugly) as needed, as this can (and will) be addressed after our initial tests are passing. If we are writing tests against code that does not yet exist, we first write the implementation code (the code being tested) directly within the test case itself. Once the test passes, we can refactor the code out of our test and into the SUT. If the code already exists (we are writing new tests against existing code), we still need to understand and consider the implementation of the code itself, and not simply write tests against it; reviewing and critiquing existing code is an excellent way of gaining a quick understanding of a given system. Seize initial opportunities. Start off slow.
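As a minimal sketch (Jasmine-style; formatName is an illustrative function), a first pass might keep the implementation inside the test case itself:

  describe('formatName', function () {
    it('joins a first and a last name', function () {
      // temporary, in-test implementation; once the test passes,
      // extract this into the SUT and reference it from there
      function formatName(first, last) {
        return first + ' ' + last;
      }

      expect(formatName('Ada', 'Lovelace')).toBe('Ada Lovelace');
    });
  });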

Clean Pass

Once we’ve written our initial tests and they are passing, we can safely go back into our new or existing implementation code and refactor to our heart’s content. After all, one of the most rewarding aspects afforded by unit testing is the ability to refactor our code freely, with little worry or concern that we will unintentionally break something without knowing; if something breaks, our tests will inform us. Tomorrow never comes in Software Development. Clean up as you go along.

Negative Tests

The most obvious tests to write are those which assert the things we expect the code to do. But what if the code is used incorrectly? What if an argument is required and not provided, or is of an invalid type? Does our code throw an exception? Does it simply return undefined? What should it do? These are all questions we should be asking ourselves once our expected test cases are passing. From there, we need to start thinking about ways to have our code respond appropriately to negative cases – we don’t want the entire app to end up in an unpredictable state just because an uncaught exception was thrown due to some simple string-formatting argument not being passed. Test the exceptional; Test the unexpected.
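For instance, a couple of negative cases sketched against a hypothetical format helper:

  describe('format', function () {
    it('throws when the template argument is missing', function () {
      expect(function () { format(); }).toThrow();
    });

    it('throws when the template is not a string', function () {
      expect(function () { format(42); }).toThrow();
    });
  });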

Stateless Tests

One of the most important considerations to make, both during and especially after all of the above points have been considered, is the statelessness of the system while being tested. Always ask yourself: “Am I resetting the state of all my test’s dependencies back to an expected state?”. This is perhaps one of the most commonly overlooked, yet crucially important, considerations to make. A good example illustrating why this matters can be found in the common scenario of a test that invokes a method which triggers an event. If any previously executed test which handles the event has not been properly torn down (e.g. in afterEach), the handling object will still exist, and thus handle the event. This typically results in a change in state, more often than not causing an unexpected error to be thrown. Always use set-ups (e.g. beforeEach) to configure your test environment, fixtures, and any dependencies your tests require to operate properly. If you are setting values on anything outside the context of your tests, always use mocks, stubs, and tear-down methods (e.g. afterEach) to reset them back to an expected state. Remember, while your tests are not part of your application’s source, they are certainly part of your project’s source; this, in effect, requires them to be viewed as first-class citizens, subject to the same quality of design and implementation as project source. Tests will need to evolve and be continually maintained. Treat the test environment with respect; ensure you return it in a predictable state. Leave it the way you found it.
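A sketch of this set-up / tear-down discipline (CartView and its destroy method are illustrative):

  describe('CartView', function () {
    var view;

    beforeEach(function () {
      // configure a fresh fixture for every test
      view = new CartView();
    });

    afterEach(function () {
      // unbind event handlers so no state leaks into subsequent tests
      view.destroy();
    });

    it('re-renders when an item is added', function () {
      // ...
    });
  });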

Continued Improvement

While the above description of Stateless Tests clearly states that the test environment should remain stateless, and thus be left as we found it prior to our tests, our actual implementation code should always be improved when improvements can be made; The Broken Windows Theory is one we should all strive to live by. This especially holds true in the context of writing tests/specs against existing code. If the code is not up to par in any way – fix it. Ask yourself: “How easy was it for me to understand what this code does?”, “Is it documented in a meaningful way?”, “Would it be easier to understand if I added some quick examples?” (often, adding examples is simply a matter of pointing to, or annotating the source with, the test cases themselves). We can have the greatest, most elegant framework and foundation on which to build the greatest apps in the world, but if we allow our code quality to degrade, our apps will gradually decay into chaos. Set a higher standard, and live by it. Leave the source better than you found it.

Meaningful Tests

It is quite easy to get caught up in the perceived quality of a system’s tests simply by measuring them against general Code Coverage metrics. This is a subject I have spoken to at length many times. While code coverage certainly has its purpose, and can be helpful, it is often not very reflective of reality. Judge your tests not by the number of test cases or units tested, but rather by the meaningfulness of each specific test case itself. Ask yourself: “What is the overall value of this test?”, “Am I testing the obvious?” (such as a simple getter/setter). Focus on what’s important; test what is of most value first. This affords one the satisfaction of knowing that, should time constraints or shifting priorities require moving focus to something else, the most important test cases are covered. Focus on what’s important.

Know when you are done

It is quite possible to go on refactoring beyond what is essential. As such, it’s important to know when you’re done. Some questions to ask yourself: “Does the code do what it needs to do?”, “Is the code clean, understandable, performant, efficient, etc.?”, “Does it have adequate coverage?” If these questions can be answered in the affirmative, then you’re most likely done. Many times it’s tempting to continually refactor, as the more one refactors, the more opportunities for further abstraction arise. When confident that your most important objectives have been met, you’re done. Know when to stop.

Concluding Thoughts

It is important to note that the above considerations are by no means exhaustive – and this is intentionally so, as each point is specifically intended to provide just enough guidance to ask the right questions, and thus solve problems in a pragmatic manner.

Over the years, I have found that it can be particularly helpful for developers new to a specific domain, or new to TDD/BDD in general, to consider the steps listed above from time to time in a general, summarized form. After doing this regularly, it becomes second nature, ingrained in one’s daily development process.

  1. Understand the Problem
  2. Start off slow
  3. Clean up as you go along
  4. Test the unexpected
  5. Leave the test environment the way you found it
  6. Leave the source better than you found it
  7. Focus on what’s important
  8. Know when to stop

Quick Tip: Backbone Collection Validation

Oftentimes I find the native Backbone.Collection implementation to be lacking when compared to its Backbone.Model counterpart. In particular, Collections generally lack direct integration with a backend persistence layer, as well as the ability to validate models within the context of the collection as a whole.

Fortunately, such shortcomings can easily be circumvented due to the extensibility of Backbone’s design as a generalized framework. In fact, throughout my experience utilizing Backbone, there has yet to be a problem I have come across which I was unable to solve easily, either by leveraging one of the many Backbone extensions or, more often than not, by simply overriding Backbone’s default implementation of a given API.

Validating Collections

Perhaps a common use-case for validating a collection of Models can be found when implementing editors which allow for adding multiple entries of a given form section (implemented as separate Views), whereby each section has a one-to-one correlation with an individual model. Rather than invoke validation on models from each individual view, and manage which models are in an invalid state from the context of a composite view, it can be quite useful to simply validate the collection from the composite view, which, in turn, results in all models being validated and their associated views updating accordingly.

Assuming live validation is not being utilized, validation is likely to occur when the user submits the form. As such, it becomes necessary to validate each model after their views have updated them as a result of the form being submitted. This can be achieved quite easily by implementing an isValid method on the collection which simply invokes isValid on each model within the collection (or optionally, against specific models within the collection). A basic isValid implementation for a Collection is as follows:
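  // a sketch: the collection is valid only if every model is valid.
  // mapping over all models (rather than short-circuiting) ensures each
  // invalid model triggers its corresponding "invalid" event
  var BaseCollection = Backbone.Collection.extend({
    isValid: function (options) {
      return this.map(function (model) {
        return model.isValid(options);
      }).every(Boolean);
    }
  });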

As can be seen in the above example, the Collection’s isValid method simply invokes isValid on its models. This causes each model to be re-validated, which in turn results in any invalid models triggering their corresponding invalidation events, allowing views to automatically display validation indicators, messages, and the like; particularly when leveraging the Backbone.Validation plugin.

This example serves well to demonstrate that, while Backbone may not provide everything one could ever ask for “out of the box”, it does provide a design which affords developers the ability to quickly, easily, and effectively extend the native framework as needed.

Fluent APIs and Method Chaining

Of the vast catalog of Design Patterns available at our disposal, oftentimes I find it is the simpler, less prominent patterns which are used quite frequently yet receive much less recognition; a good example being the Method Chaining pattern.

Method Chaining

The Method Chaining Pattern, as I have come to appreciate it over the years, represents a means of facilitating expressiveness and fluency when used articulately, and mere convenience in its less sophisticated use-cases.

Design Considerations

When considering Method Chaining, one should take heed not to use the pattern merely as syntactic sugar for writing fewer lines of code; rather, Method Chaining should be used, perhaps more appropriately, as a means of implementing Fluent APIs which, in turn, allow for writing more concise expressions. By design, such expressions can be written, and thus read, in much the same way as natural language, though they need not be the same from a truly lexical perspective.

The resulting terseness afforded by Method Chaining, while convenient, is in most cases not, in and of itself, reason alone for leveraging the pattern.

Implementation

Method Chaining, when considered purely from an implementation perspective, is perhaps the simplest of all design patterns. Its basic mandate simply prescribes returning a reference to the object on which a method is being called (in most languages, JavaScript in particular, the this pointer).

Consider the following (intentionally contrived) example:
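  // an intentionally contrived sketch: each method returns `this`
  function Counter() {
    this.value = 0;
  }

  Counter.prototype.increment = function () {
    this.value += 1;
    return this; // return the instance to enable chaining
  };

  Counter.prototype.decrement = function () {
    this.value -= 1;
    return this;
  };

  new Counter().increment().increment().decrement().value; // 1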

As can be seen, implementing Method Chaining requires nothing more than simply having methods return a reference to this.

API Simplicity

Method Chaining is typically used when breaking from traditional Command-Query Separation (CQS) principles, the most common example being the merging of getters (Queries) and setters (Commands). I especially like this technique as, aside from being very easy to implement, it allows an API to be used in a more contextual manner from the developer’s perspective, as opposed to one dictated by the API designer’s preconceptions of how the API will be used. For example:
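  // a sketch of a merged getter / setter (the Alert name is illustrative)
  function Alert() {
    this._message = '';
  }

  Alert.prototype.message = function (value) {
    if (arguments.length === 0) {
      return this._message; // no argument: a Query (getter)
    }
    this._message = value;  // argument provided: a Command (setter)
    return this;
  };

  new Alert().message('Saved!').message(); // 'Saved!'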

As can be seen, the message method serves as both a getter and setter, allowing users of the API to determine how the method should be invoked based on context, as well as affording developers the convenience of needing only to remember a single method name. This technique is used quite heavily in many JavaScript libraries and has undoubtedly contributed to their success.

We could further expand on this concept by determining a method’s invocation context based on the arguments provided, or the types of specific arguments, thus, in turn, merging various similar methods based on a particular context.

An important design recommendation: if you are writing an API which violates CQS (which is quite fine, IMHO), API consistency is, as always, important; thus all getters and setters should be implemented in the same manner.

Fluency

As was mentioned, in most cases Method Chaining is leveraged to facilitate APIs which are intended to be used fluently (e.g. an Internal DSL). Such implementations typically provide methods which, by themselves, may have little meaning; however, when combined, they allow for writing expressions which are self-describing and make logical sense to users of the API.

For example, consider the way one might describe a Calendrical Event:

Vacation, begins June 21st, ends July 5th, recurs Yearly.

We can easily implement a Fluent API such that the above grammar can be emulated in code as follows:
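  // one possible sketch (the event factory and value accessor are illustrative)
  function event(name) {
    var config = { name: name };
    var api = {
      begins: function (date) { config.begins = date; return api; },
      ends: function (date) { config.ends = date; return api; },
      recurs: function (interval) { config.recurs = interval; return api; },
      value: function () { return config; }
    };
    return api;
  }

  var vacation = event('Vacation')
    .begins('June 21')
    .ends('July 5')
    .recurs('Yearly');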

The same methods can also be chained in different combinations, yet yield the same value:
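  // a different combination, yielding the same value
  var sameVacation = event('Vacation')
    .recurs('Yearly')
    .begins('June 21')
    .ends('July 5');

  // both yield: { name: 'Vacation', begins: 'June 21',
  //               ends: 'July 5', recurs: 'Yearly' }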

Given the above example, we could further improve on the fluency of the implementation by adding intermediate methods which can, by themselves, simply serve to aid in readability, or, provide an alternate modifier for chaining:
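  // extending the sketch above (added inside the event factory, before
  // returning api): `and` is purely grammatical, simply referring back
  // to the api itself to aid readability
  api.and = api;

  event('Vacation').begins('June 21').and.ends('July 5').and.recurs('Yearly');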

When implementing Fluent APIs, we can design such that different logical chaining combinations can yield the same result, thus affording users of the API the convenience of determining the most appropriate expressions based on context or personal preference, even grammatically so. Illogical chaining combinations can be handled by either throwing an exception, or they can simply be ignored based on the context of a preceding invocation – though, of course, one should aim to avoid designs which allow for illogical chaining.

The Ubiquitous Example – jQuery

While Method Chaining and Fluent APIs, as with most design patterns, are language agnostic, in the JavaScript world perhaps the most well known implementation is the jQuery API; for example:
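  // a typical jQuery chain: each method returns the matched set
  $('.notice')
    .addClass('visible')
    .text('Changes saved.')
    .fadeIn(200)
    .delay(2000)
    .fadeOut(200);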

In addition to jQuery, there are numerous other JavaScript Method Chaining and Fluent APIs of note; Jasmine in particular has a very expressive API which aligns excellently with its design goals. The various libraries implementing the Promises/A spec also provide very clear and concise Fluent APIs.

Concluding Thoughts

Over the years I have leveraged Method Chaining to facilitate the design of Fluent APIs for various use-cases. The two patterns, when combined, can be especially useful when designing Internal DSLs, be they third-party libraries or APIs specific to a particular business domain.

Pseudo-abstraction in Backbone

As has been widely disseminated, JavaScript, being a dynamic, prototypal language, affords developers the ability to design outside the rigid confines inherent to statically typed languages. Interestingly, perhaps even somewhat paradoxically, this same flexibility also allows for programmatically simulating specific features commonly found in statically typed languages, if desired.

While JavaScript does not have a traditional type system, nor does it provide traditional constructs by which user-defined types are specified, it is still, necessarily so, a common and desirable design goal to implement a system with the notion of classes in order to provide data types which encapsulate domain logic and facilitate reuse; both of which are key design attributes which help mitigate the complexity of large applications.

Nearly all JavaScript MV* frameworks provide such facilities, and do so in a consistent and convenient manner; most allow for practical circumvention of the prototype system almost entirely. It is also worth noting that while the libraries themselves are generally implemented in a succinct, terse style, large applications typically call for a more traditional object-oriented design, while remaining prudent to align with the conventions and idioms particular to JavaScript itself.

Abstraction

At times it will be necessary to design a system with reusable abstractions. In fact, it is quite hard to imagine a modern SPA of even marginal complexity being maintainable without some level of base class functionality.

For instance, it can be particularly useful to implement base Models and Collections which provide general functionality common amongst all Models and Collections; such as the parsing and appropriate routing of service API exceptions to error callbacks, and successful service results to success callbacks, and so forth.

Since such base classes generally do not provide any concrete behaviors themselves (hence the abstraction), they are of considerable value, specifically when reused amongst various large scale, distributed projects; and, from a design perspective, it is often important for one to ensure such classes are only used as intended.

While one can convey the intended usage of a base class easily enough by means of comments alone (and that is quite fine if you prefer), it is just as easy to ensure base classes are only used as intended programmatically, by implementing a simple conditional which checks an instance’s constructor against the base class’ constructor function. For example (in the context of Backbone, though any framework applies):
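  // a sketch of a pseudo-abstract base class: constructing the base
  // directly throws, while subclasses construct normally
  var BaseModel = Backbone.Model.extend({
    constructor: function () {
      if (this.constructor === BaseModel) {
        throw new Error('BaseModel is abstract; extend it instead.');
      }
      Backbone.Model.apply(this, arguments);
    },

    defaults: {
      createdAt: null // an illustrative shared default
    }
  });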

Then, one can simply extend the base class, invoking defaults as needed:
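  // extending works as usual; Employee merges in the base defaults
  var Employee = BaseModel.extend({
    defaults: function () {
      return _.extend({}, BaseModel.prototype.defaults, {
        firstName: '',
        lastName: ''
      });
    }
  });

  var employee = new Employee();  // fine
  var invalid = new BaseModel();  // throws: BaseModel is abstract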

Concluding Thoughts

Like many in the JavaScript community, I, too, am of the opinion that JavaScript should not be made to reflect that which is common to other languages simply for the sake of familiarity; rather, one should be prudent to leverage the flexibility inherent to the language itself. This example serves as a demonstration of how such flexibility can be utilized to provide what a specific design calls for, at the discretion of the developer.