Building a shared codebase across platforms with JavaScript


By: Phillip Whitaker

Building services, websites and apps from scratch can be a long process, with effort often duplicated as business and display logic is implemented separately across different back-end and front-end platforms.

It’s more important now than it’s ever been to ensure that businesses have the flexibility to adjust their offerings to meet both business and user needs with minimal impact.

JavaScript, as much as people like to point out its issues, has one strength in that it’s well placed to meet this need for flexibility, and as the browser continues to erode the functional gap with native mobile and desktop apps it’s only going to remain relevant.

While an argument could be made for replacing JavaScript with a language that runs on the back-end and can be transpiled into JavaScript for the browser, these tools open teams up to risks around how that transpilation is handled, and since the transpiled output still needs to be tested as JavaScript, this approach doesn’t really meet the goal of a truly single-language shared codebase.

The benefits of a shared codebase

There are a number of reasons a shared codebase is useful. It cuts down on the potential for duplicated code, but more specifically you can benefit from:

Having one implementation of your core logic

By creating a library for your core business or display logic (e.g. a component library) you have one source of truth for how your service or product behaves, allowing for a consistent user and developer experience.

This library should remain focused and a clear set of contextual boundaries should be defined to ensure that the library continues to function well and doesn’t end up becoming bloated as more and more logic gets built into it.

Don’t be afraid to split logic across multiple libraries that do one job and do that job really well, as you can use a package manager to bring those libraries together in your downstream projects.

You can create and update applications quickly

If your logic is implemented as libraries then your downstream applications essentially become ‘glue’ that sticks those libraries together, with the application-specific code there to allow interaction with the underlying logic.
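As a rough sketch, that ‘glue’ might look something like the following, assuming two hypothetical internal packages, @acme/pricing for business logic and @acme/basket-ui for display logic:

```javascript
// A minimal sketch of an application acting as 'glue'. The @acme/* package
// names and their exports are hypothetical, for illustration only.
const { calculateTotal } = require('@acme/pricing');
const { renderBasket } = require('@acme/basket-ui');

// The application-specific code is just the wiring between the libraries.
function showBasket(items) {
  const total = calculateTotal(items);
  return renderBasket(items, total);
}

module.exports = { showBasket };
```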

This means that should you need to update that application-specific code you’re free to do so, safe in the knowledge that you’re not going to change the underlying logic.

If that underlying logic requires a change then it’s easier to update multiple downstream applications at once, and with the use of semantic versioning you can ensure that those downstream applications know the impact those changes are likely to have.

A key caveat here, however, is that this agility is only really available if you use a flat dependency structure, as bubbling changes up across multiple levels of inheritance makes it much harder to get those changes live.

You can have a standardised developer experience

Standardising the set of tools used to write, test, report on and deploy code across your libraries and the applications that use them means there’s less context switching for developers as they move from project to project.

This standard won’t remain static though and will evolve as developers encounter issues with the setup, but improvements made by one team will be easy for other teams to adopt as they’re just tweaks to the tools they’re already using.

Having a consistent approach to how a development team works also means those teams are less likely to be producing legacy code as developers from other teams will be able to pick up the project and know how to generate documentation, run tests and make & deploy their changes.

You can mitigate upgrade risks at different levels

If your codebase shares the same runtime it becomes easier to verify whether upgrading to a new version of that runtime will impact your applications. You can test the updated runtime against every library and application that uses it and patch any issues at the appropriate level, rather than finding out things aren’t working later on in production.

On the flip side, if the upgrade risk is internally produced, such as a core library being updated, you can make sure your versioning format conveys what the changes represent (i.e. semantic versioning). And should a bug be found to impact a downstream project, the developers on that project can easily create a test case to give to the library team to help them reproduce it, which speeds up resolution time massively.
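As a sketch of what that hand-over might look like, a downstream team could write a minimal failing test against the library’s public API. The @acme/money package and formatCurrency function here are hypothetical, and the test uses Jest-style assertions:

```javascript
// A hypothetical minimal reproduction case to hand to the library team.
const { formatCurrency } = require('@acme/money');

test('formats negative amounts with the minus sign before the symbol', () => {
  // Passes on the previous version, fails on the new one, which pins the
  // regression down to a single, easily reproducible behaviour.
  expect(formatCurrency(-1050, 'GBP')).toBe('-£10.50');
});
```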

Some downsides

A shared codebase isn’t a perfect solution. There are a few things to be mindful of when adopting one to ensure it doesn’t end up causing more damage than the value it brings.

Dependency management overhead

If you’re building multiple libraries around your business and display logic it can feel like a lot of work is involved in making a simple change at a lower level, more so than if that logic simply lived in the main application.

This is true, but it’s a little short-sighted: when the business pivots and wants to re-use that logic in a different context the time required to deliver something will be smaller, and any changes made on the back of one application’s needs will become available to all other applications.

The important thing is not to bring additional overhead on yourself by using an inheritance structure for your libraries. Use composition where possible, and if you do need to use a library across multiple levels you can do so via peer dependencies.

A peer dependency allows a single instance of a library to be installed at the top level and used by all libraries that depend on it, provided the installed version is compatible with the range each dependent library targets.
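As a sketch, a hypothetical @acme/basket-ui library might declare a commonly shared dependency like react as a peer dependency rather than a direct one, leaving the downstream application to install a single compatible copy at the top level (the names and version ranges here are illustrative):

```json
{
  "name": "@acme/basket-ui",
  "version": "1.4.0",
  "peerDependencies": {
    "react": "^18.0.0"
  },
  "devDependencies": {
    "react": "18.2.0"
  }
}
```

The devDependencies entry is only there so the library can run its own tests; the consuming application provides the real copy.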

Additionally, when using multiple libraries make sure you pin your dependencies and commit your package manager’s lock files, so the library versions you’ve verified to work correctly are the ones pulled in at install time.

You can’t use the best tools for the job

When building a shared codebase it can feel like you’re locking yourself into one particular tech stack and preventing yourself from using the best tools for getting a job done.

This is particularly evident on the back-end, where it’s common for development teams to use tech stacks that offer type safety and the performance gains of being lower level than a language like JavaScript.

Ultimately this is a very context dependent issue and you’ll have to make the decision on whether the code re-usability and consistent development experience outweighs these issues.

However, there are some approaches that allow a shared codebase to be used within a more disparate tech stack.

Should you need to use a specific set of tools elsewhere, you can always create a micro-service wrapper around the parts of the shared codebase that these more tailored services need to interact with.
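As a sketch of what that wrapper could look like, here’s a minimal Express service exposing a single function from a hypothetical @acme/pricing library over HTTP (the route, port and package are illustrative):

```javascript
// A minimal micro-service wrapper around part of a shared JavaScript
// codebase, so services built on other stacks can call it over HTTP.
const express = require('express');
const { calculateTotal } = require('@acme/pricing'); // hypothetical library

const app = express();
app.use(express.json());

// Expose only the part of the shared codebase the other stack needs.
app.post('/totals', (req, res) => {
  const { items } = req.body;
  res.json({ total: calculateTotal(items) });
});

app.listen(3000);
```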

There will of course be overhead from communicating across services, but if it strikes the right balance between using the best tool for that job and keeping a shared codebase for the rest of the stack then it can be a good compromise.

Building a shared JavaScript codebase

JavaScript may feel like a language that ‘just runs anywhere’, but the reality is more nuanced and there are a few techniques you need to use to avoid getting tripped up.

Build isomorphic libraries

Not all JavaScript runtime environments are created equal, and this goes further than older browsers not supporting some of the newer fanciness.

An example of this is how reading a file is handled across the browser, NodeJS and React Native.

In the browser you receive a File object from a file upload input in a form, which then needs to be read using a FileReader, but in NodeJS you can just use the fs module to read the file. In React Native you end up using something like react-native-fs or expo-file-system to read the file, but there’s a caveat.

While the browser and NodeJS have built-in base64 functionality, React Native does not, which means you need to use a library or build your own implementation.

Where possible you should look to avoid exposing these differences to the consumer of your library and instead give them one interface to call that then handles the complexities inside of it.

A good technique for this is to use something akin to a strategy pattern with different implementations for the differing behaviour and one method that decides which implementation to call, based on the runtime environment.
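As a minimal sketch of that approach, a library might expose a single readAsBase64 function that picks an implementation at runtime. The runtime checks, the react-native-fs usage and the function names are illustrative and would need adapting to the environments you actually target:

```javascript
// Browser strategy: expects a File object from an <input type="file">.
function readInBrowser(file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    // readAsDataURL produces 'data:<type>;base64,<data>', so strip the prefix.
    reader.onload = () => resolve(reader.result.split(',')[1]);
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
}

// NodeJS strategy: expects a file path and uses the built-in fs module.
async function readInNode(path) {
  const { readFile } = require('fs/promises');
  return (await readFile(path)).toString('base64');
}

// React Native strategy: expects a file path; react-native-fs can return
// base64 directly, which sidesteps the missing built-in base64 support.
async function readInReactNative(path) {
  const RNFS = require('react-native-fs');
  return RNFS.readFile(path, 'base64');
}

// The single entry point consumers call; it chooses a strategy based on
// the runtime environment so the differences stay hidden inside the library.
function readAsBase64(source) {
  if (typeof navigator !== 'undefined' && navigator.product === 'ReactNative') {
    return readInReactNative(source);
  }
  if (typeof window !== 'undefined' && typeof FileReader !== 'undefined') {
    return readInBrowser(source);
  }
  return readInNode(source);
}

module.exports = { readAsBase64 };
```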

Break functionality into different libraries

Don’t be afraid to create as many libraries as you need to ensure a clean separation of concerns.

It’s easy to fall into the trap of adding more and more responsibility to a library, justifying it as being at the appropriate level, but you shouldn’t be looking at the ‘level’; instead focus on what the library is meant to achieve.

Your downstream application can bring in as many libraries as needed, and if you use peer dependencies to manage the more commonly imported ones you can take advantage of composition instead of being tied down by the inflexibility of inheritance.

When relying on these upstream libraries it’s good practice to write a set of integration tests that cover the functionality you’re getting from them; this gives you a means to ensure that the upstream library works as intended.
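As a sketch, such a test exercises the real library rather than a test double, so a change in the library’s behaviour shows up in your CI rather than in production. The @acme/pricing package, its calculateTotal export and the expected value are all hypothetical, and the assertions are Jest-style:

```javascript
// Integration test: no mocks, the real upstream library is exercised.
const { calculateTotal } = require('@acme/pricing');

describe('integration: @acme/pricing', () => {
  test('applies the bulk discount the checkout flow relies on', () => {
    const items = Array(10).fill({ sku: 'mug', price: 500 });
    // If a new library version changes this behaviour, the failure is
    // caught here rather than surfacing in production.
    expect(calculateTotal(items)).toBe(4500);
  });
});
```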

Use Semantic Versioning and pin your dependencies

If you’ve used JavaScript for a while you’ve probably seen the impact of poorly managed versioning, mostly due to the way npm doesn’t pin dependencies by default, instead allowing new minor and patch versions to be pulled in, but not new major versions.

This of course means there’s a reliance on upstream libraries adhering to semantic versioning so that breaking changes aren’t introduced within a minor or patch version, and this is normally where things fall over.

To counteract this you can pin your dependencies: remove the ^ and ~ prefixes from the versions in your package.json so that only the exact versions you know work are pulled in.

Back this up by committing and using your package-lock.json file and creating deployment artefacts, and you’ve mitigated most of the risk associated with npm’s default approach.
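As a sketch, a pinned dependencies block in package.json simply lists exact versions, with no ^ or ~ prefixes (the packages and versions here are illustrative):

```json
{
  "dependencies": {
    "@acme/pricing": "3.2.1",
    "react": "18.2.0"
  }
}
```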

However, you shouldn’t just rely on these and get sloppy with your own versioning. Instead, use Semantic Versioning so that anyone looking at a version number can instantly understand the impact of upgrading to it.

In Semantic Versioning, changes to the major, minor and patch values (formatted as major.minor.patch) indicate the type of changes that have been made:

  • Major — This means the new version has breaking changes and you should make sure that your code works fully with this new version
  • Minor — This means the new version has new functionality, but that functionality is backwards compatible so you should be able to use it without things breaking
  • Patch — This means the new version has some bug fixes and improvements but these are backwards compatible so you should be fine

Write tests and documentation

Not only is writing tests good development practice, but when using libraries it’s important to write different types of tests: the encapsulation that unit tests run within (if done properly) means they won’t exercise the library code, and thus won’t catch any bugs brought on by changes to that library.

As you start to rely on more and more functionality from one of your upstream libraries, add a set of integration tests (meaning tests that don’t use test doubles for lower-level libraries) around the functionality you’re using, and make sure these are maintained and run in CI.

This set of integration tests is valuable for verifying the impact of upgrading to a newer version of the library: install the newer version, run the integration tests, and if there are any failures you know you need to invest time in fixing compatibility with that library before it can go live. This is far easier than having the new version installed at deployment and finding out later that something was broken.

Depending on your CI tooling you can even automate this, using webhooks fired from your package repository when a new version is published to install it and run your integration test suite, or you can use external tools to manage this for you.

A good test suite also serves as documentation, but this doesn’t mean you shouldn’t write documentation anyway, as it might not always be developers who are trying to understand what your library does.

A number of JavaScript documentation tools exist, but jsdoc seems to be the most commonly used for turning code comments into a readable website, and it has support for plugins such as markdown support and jsdoc-mermaid, which allows diagrams to be used to illustrate concepts.
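As a sketch of the comment style jsdoc consumes, a documented function might look like this (the function itself is a hypothetical example):

```javascript
/**
 * Calculates the total price of a basket in minor currency units.
 *
 * @param {Array<{sku: string, price: number}>} items - The basket line items.
 * @returns {number} The total price in pence.
 * @example
 * calculateTotal([{ sku: 'mug', price: 500 }, { sku: 'tea', price: 250 }]); // 750
 */
function calculateTotal(items) {
  return items.reduce((total, item) => total + item.price, 0);
}

module.exports = { calculateTotal };
```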