Software dependency management - a primer
"Software dependency management" just sounds so glam, doesn't it? If we rephrase it to say "do less, get more" it starts to sparkle a little. Let's take a look at this essential topic and some rules that Go Tripod follow in our projects.
A "dependency" refers to a unit of code that the rest of your code relies on. The most common way of incorporating one is to use the package manager associated with your language of choice. So in Ruby you might add a gem to your project, Rust uses crates, and JavaScript has npm. Easy peasy.
Except it isn't. It's telling that searching for the title of this post on Google turns up the Wikipedia page for "dependency hell". Adding dependencies is one thing but we're talking software dependency management here, and that involves not only keeping them up to date, but making sure you actually need those dependencies you're adding.
Rule 1 of software dependency management: do you need it?
Let's talk about that last point first. Here we go, we're adding a dependency:
npm install stringconcat
In our theoretical JavaScript project, we're running a command to add a dependency called "stringconcat" so that we can join strings together. But do we need that? In JavaScript, we certainly don't:
const joinedString = 'my' + 'string';
This is a contrived example, but back in 2016 a very simple library called left-pad was removed from npm and caused a great deal of consternation.
So before you run that command willy-nilly, think: do you need it? Or can you just write it yourself?
Rule 2: is it any good?
Before adding a dependency, you need to make sure it's of sufficient quality. That means more than checking it has the features you need; it doesn't demand an in-depth source code analysis, but it does require some due diligence:
How popular is your dependency? Are there more popular alternatives?
Popularity doesn't guarantee quality, but it's a good indicator that the community relies on it. That should mean that people are willing to maintain it.
How many maintainers does it have?
Is there only one person working on this code? What happens if they get really busy?
Are there lots of open issues or pull requests?
If the code's on GitHub, take a look at the issues and pull requests. Does it seem like these are neglected, or is the project team responsive and present?
There's no hard and fast rule for assessing the health of a project, but thinking along these lines should allow you to get a feel for whether code's going to be well maintained.
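If you want numbers to go with that gut feel, the package registries expose some of them. RubyGems.org, for example, has a public JSON API (`/api/v1/gems/<name>.json`) whose payload includes download counts and project links. A minimal sketch of sifting through such a payload (the values here are made up for illustration):

```ruby
require "json"

# An illustrative payload in the shape returned by RubyGems.org's
# /api/v1/gems/<name>.json endpoint (the numbers are made up for the example)
payload = '{"name":"rails","downloads":250000000,"homepage_uri":"https://rubyonrails.org"}'

info = JSON.parse(payload)
puts "#{info["name"]}: #{info["downloads"]} downloads (#{info["homepage_uri"]})"
```

Download counts won't tell you whether the maintainers are responsive, but they're a quick first filter before you dig into the issue tracker.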
Rule 3: do you still need it?
Our code evolves, and so do the platforms we build on. In both cases we need to keep on top of our dependencies to make sure we're not bringing anything in unnecessarily. In particular, when a native implementation of a dependency's functionality becomes available, switching to it gives us a speed boost for free.
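The same pattern shows up in Ruby: digging into nested hashes used to be a common reason to reach for a helper gem or hand-rolled utility, but `Hash#dig` has been built in since Ruby 2.3. A minimal sketch:

```ruby
settings = { db: { host: "localhost", port: 5432 } }

# Hash#dig (built in since Ruby 2.3) walks nested hashes natively,
# removing the need for a helper dependency
host = settings.dig(:db, :host)
puts host # => "localhost"

# dig returns nil rather than raising when a key is missing
settings.dig(:db, :password) # => nil
```

When the language catches up like this, dropping the dependency is almost always a win.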
Rule 4 of software dependency management: keep up to date!
Rules 1–3 are mere preludes to the killer pain of dependency management: updates. Let's look at an example using Ruby on Rails 6, which introduced features like Action Text and Action Mailbox. You try to upgrade Rails as follows:
bundle update rails
And see the dreaded:
Bundler could not find compatible versions for gem "actionpack"
<INSERT ENDLESS MESSAGES>
This is the reality of dependency hell. A gem relies on actionpack 5.x, so when you try to upgrade actionpack to 6.x you can't, because of that one gem. So you update that gem, but you can't, because either it hasn't been updated for Rails 6 or it in turn depends on another gem which is tied to Rails 5. Repeat this all the way down the dependency tree, or until you have removed all of your hair and nails, whichever comes first.
There is no foolproof way around this, but the best defence is to follow the other rules. Firstly, keep your dependencies limited to ones that you really need and ones which are well maintained. Secondly, don't pin to some random broken git revision, and don't use your own forks of code; you're asking for trouble. Thirdly, make updating your dependencies routine. Do it regularly and you'll avoid having to wade through dependency hell and migration instructions in changelogs. Not only will your hair and nails thank you, but your project will too.
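One concrete habit that helps keep updates routine is using pessimistic version constraints in your Gemfile, rather than pinning exact versions or git revisions. A sketch (the gem names and versions are illustrative):

```ruby
# Gemfile — "~>" constraints allow safe minor/patch updates
source "https://rubygems.org"

gem "rails", "~> 6.0"   # any 6.x release, but not 7.0
gem "pg", "~> 1.1"      # 1.1 and upwards, but not 2.0
```

With bounds like these, a routine `bundle update` can pull in bug fixes without silently jumping a major version on you.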
Evolving our technologies and learning how to learn
At Go Tripod, we've been in the business of software development for a long time. Although Go Tripod was officially formed in 2009, we've got blog posts going back to 2007 which describe the technologies we were interested in at the time. My personal blog goes back to 2003! Here's a quick list of the things we've been into during that time:
- ASP.NET (with MVC using Castle Project or SubSonic)
- PHP & WordPress
- Obj-C
- Adobe Flex & Flash
- Ruby & Ruby on Rails
- JavaScript (React, React Native, Ext JS & Sencha Mobile)
That's a whole host of disparate technologies. Can't we just settle on one thing?
Backend Dealings
ASP.NET is still a thing today, just as it was when we wrote about it in 2007. But in 2007, the web was changing rapidly. ASP.NET was clunky, relying on something called ViewState, a giant blob of data that got transferred on every page request. While it enabled rapid application development, the result wasn't always maintainable or conducive to quick loading. I was an early fan of Ruby on Rails and its organised approach to development (known as MVC), and it seemed like a breath of fresh air compared to the complexity of ASP.NET. We still use Ruby on Rails for our backend systems today, 14 years after we first started researching it.
On the way to 2019 though, we continued evaluating new software. Microsoft got on the bandwagon with the mostly-excellent ASP.NET MVC framework, but the Rails community was cranking out loads of great libraries on top of an excellent foundation. PHP did the same with frameworks like CakePHP, but at the time PHP had a reputation for being a bit of a mess.
Front & Centre
The same applies on the frontend, though to a lesser extent. JavaScript's always been the lingua franca of the web, powering rich interactions in the browser. But to get round some of its quirkiness, we had PrototypeJS in 2005, MooTools in 2006, and jQuery in 2006.
jQuery inspired me to create Fizzler, a selector engine for C# in 2008. It was improved and redesigned by Atif Aziz, and he still maintains it today.
A big one for me was the release of Ext JS in 2007/2008, and I went on to author and co-author several books on this rich framework.
In 2013, ReactJS was released and was as much of a game-changer for frontend development as Ruby on Rails had been for backend. We jumped on it, and its sibling for apps, React Native, to let us build complex user interfaces in a way that was maintainable and testable.
Why tho?
What makes us continually search for a better mousetrap? Looking at the bigger picture gives a clear reason: the difference in developer experience between jQuery in 2007 and React in 2019 is huge. A better developer experience means that we can provide an end product that is better tested and is more usable. The gap between ASP.NET and Ruby on Rails is a bit more muddled; we could use ASP.NET MVC now, which is a much better product. But Ruby on Rails wins out, because it's simpler to use and has a huge lead in community-provided libraries to help us turn out top-notch projects.
But...
There's an outlier here. Something that has remained constant, like a comforting blanket. WordPress was released in 2003 and remains incredibly popular, powering around a third of the world's websites. It's not perfect, but it's developed in the open and has a massive support community. We've used it for years and hope to keep on doing so for years to come. The recent release of Gutenberg gives us the chance to let users build amazing page layouts with ease, and using partners such as WPEngine means we can keep things secure and backed up without any worries.
Conclusion
The world of software moves quickly, but the goals remain the same: make great experiences for our clients and their customers. The technology should fade away into the background, but we know it's there, quietly powering and empowering.
Migrate from GitLab to GitHub
Just interested in the code? Cut to the chase.
GitHub and GitLab are competing services providing source control products based on Git. Source control enables us to keep track of all of the code changes we make to a project so that we can roll them back in the event of a problem. Seeing the history of a project can also help understand why decisions were made, years after they happened.
GitHub have just announced that they're providing free private repositories for individuals. This is great news for the community at large, but since we run an "Organisation" account on GitHub, it didn't directly affect us. However, it did give us a chance to re-evaluate how we host our source code. We previously self-hosted GitLabCE, which is a fine piece of software. However, it had two drawbacks:
1. We're hosting another server, which requires updating and auditing.
2. Some third-party services only interact with GitHub, not GitLab.
We decided to close our GitLab server and move to GitHub, and the first step of that is moving our repositories. We had 132 repositories hosted on that server, so moving them one by one simply wasn't practical. Fortunately, both GitLab and GitHub have APIs which we could use to automate the migration.
We're going to use a Ruby script to call the APIs. The gitlab and octokit gems wrap the GitLab and GitHub APIs respectively, so once we've installed those we need to configure them:
Gitlab.configure do |config|
  config.endpoint = GL_ENDPOINT          # e.g. "https://gitlab.example.com/api/v4"
  config.private_token = GL_PRIVATE_TOKEN
end
gh_client = Octokit::Client.new(:access_token => GH_PRIVATE_TOKEN)
Having done so we can fetch all of our GitLab projects:
gl_projects = Gitlab.projects.auto_paginate
Auto-pagination means we can grab all of them at once. Now, we need to do something with each project:
gl_projects.each do |gl_project|
  # do something in here
end
Firstly, we specify the destination repository in GitHub:
destination_repo = "#{GH_ORG_NAME}/#{gl_project.name}"
GH_ORG_NAME is a variable containing the organisation which will contain this repository, in our case "gotripod". You'll see where to specify this in the full source code later.
Before we can grab the repo from GitLab, we need to make sure we're a member of its project. That's straightforward:
begin
  # 4 is our GitLab user ID; 40 is GitLab's "Maintainer" access level
  Gitlab.add_team_member(gl_project.id, 4, 40)
  puts "You've been successfully added as a maintainer of this project on GitLab."
rescue Gitlab::Error::Conflict
  puts "You are already a member of this project on GitLab."
end
And we also need to create the destination repo on GitHub:
begin
  gh_client.create_repository(gl_project.name, organization: GH_ORG_NAME, private: true)
  puts "New repo created on GitHub."
rescue Octokit::UnprocessableEntity => e
  # This usually means the repo already exists, which is fine;
  # anything else will show up in the message below
  puts "Error creating repository on GitHub: #{e.message}"
end
Finally, we can trigger the import!
gh_client.start_source_import(
  destination_repo,
  gl_authed_uri(gl_project),
  vcs: "git",
  accept: Octokit::Preview::PREVIEW_TYPES[:source_imports]
)
gl_authed_uri is a method defined as:
def gl_authed_uri(gl_project)
  gl_repo_uri = URI.parse(gl_project.http_url_to_repo)
  "http://oauth2:#{GL_PRIVATE_TOKEN}@#{GL_SERVER}#{gl_repo_uri.path}"
end
If we tried to import from the GitLab repository URL without doing this, it would throw an error telling us we don't have authorisation to access it. With this method, we're using our private GitLab token to gain access.
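As a standalone sketch of what that rewriting does (the token, server, and repository URL here are placeholder values, not our real ones):

```ruby
require "uri"

GL_PRIVATE_TOKEN = "token123"        # placeholder token
GL_SERVER = "gitlab.example.com"     # placeholder server

# Same idea as gl_authed_uri, taking a raw URL for illustration
def gl_authed_uri(repo_url)
  path = URI.parse(repo_url).path
  "http://oauth2:#{GL_PRIVATE_TOKEN}@#{GL_SERVER}#{path}"
end

gl_authed_uri("http://gitlab.example.com/gotripod/migrator.git")
# => "http://oauth2:token123@gitlab.example.com/gotripod/migrator.git"
```

The `oauth2:<token>` pair sits in the userinfo part of the URL, which is how git itself will authenticate when GitHub pulls the repository.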
For extra bonus points, we can get a report of the progress of each import using Octopoller:
Octopoller.poll(timeout: 15000) do
  result = gh_client.source_import_progress(destination_repo, accept: Octokit::Preview::PREVIEW_TYPES[:source_imports])
  # \r returns to the start of the line so each status overwrites the last
  print "\r#{result.status_text}"
  if result.status_text == "Done"
    nil
  else
    :re_poll
  end
end
This keeps checking the import until it returns a status of "Done", at which point the import is complete! This makes the migration process a lot slower though, because it'll wait until GitHub confirms that it's done its work. If you comment out this block, then you can speed through all of the repos and let GitHub do the work in the background. It'll email you when an import has completed.
We've made the full source code of this basic GitLab to GitHub migrator available on GitHub (unsurprisingly!). There's some additional configuration code and a simple feature to allow resuming interrupted migrations. We don't provide support, but you're welcome to open issues and pull requests to collaborate on improving it. There are certainly a number of improvements that could be made (for example allowing an option to disable the progress report without having to comment it out), but we'll leave that for another post.
Grab our GitLab to GitHub migrator now!
How we host
As part of our drive to double-down on software and provide a great experience for our customers, we've been reviewing our hosting infrastructure. We recently talked about how our WordPress clients enjoy security and redundancy, and we want to continue to provide that same great experience for the various web apps that we host. In this post, we'll look at the three providers we've chosen to partner with in order to make this happen.
Heroku
Heroku is known as a "Platform as a Service" or PaaS, which means that it provides infrastructure to run an application while abstracting away things like operating system updates and configuration of the underlying system components. It integrates with source control to allow us to deploy versioned code (and roll it back in case of issues!), and the dyno model lets us easily deal with high-traffic scenarios. Because Heroku is a cloud-based system, there's no single point of failure, which means that if something does go wrong, chances are Heroku will automatically switch things behind the scenes so visitors never see an issue.
Heroku is built on Amazon's AWS platform and, as such, it complies with accreditations such as PCI Level 1 and ISO 27001. This, on top of the huge number of Heroku addons and the excellent administration tooling, means that Heroku is our primary choice when we need to host Ruby on Rails or PHP applications.
Microsoft Azure
We built a bespoke ASP.NET content management system for one of our clients, and since Heroku doesn't formally support ASP.NET, we needed to look elsewhere. Azure is Microsoft's answer to Amazon's AWS and their Web Apps provide a similar deployment model to that provided by Heroku. We can leverage Azure Storage to store uploaded documents in redundant fashion, and use Azure's SQL Server support to deliver scalable and highly available databases. While Heroku is perhaps easier to use, Azure is more flexible and lets us work at a lower-level, and as such we'll be moving our client's infrastructure to Azure in 2019.
Netlify
For simpler sites which require little-to-no dynamic content, we turn to Netlify. It offers the same source control-based deployment model as Heroku and Azure, but it doesn't support projects that are powered by a programming language or framework like the previously mentioned Rails, PHP or ASP.NET. Instead, its strength lies in making static sites as good as they possibly can be, with free TLS/SSL, caching support, and a redundant delivery network. If needs be, we can augment the static site with some simple dynamic features such as forms, login & registration, and custom functions.
Host with Go Tripod
You may notice that when we talk about these providers, we're reusing certain key terms such as versioning and redundancy. That's because these are features that we think should be included as standard by any responsible hosting provider. By setting ourselves up for success, we can focus more on building software and less on maintaining infrastructure. If you're looking for peace of mind when hosting your business-critical applications then get in touch.
A caring software community
We've recently been extending our reach into the local area by approaching organisations that help connect businesses together. The first of these is Falmouth's Business Improvement District (BID), who help boost the town by investing in marketing, business support, events and a variety of improvement mechanisms. The second is Software Cornwall, who connect and support Cornwall's digital technology community. These are the first two in a series of partnerships we're entering into in order to better understand the needs of the local community.
WOFF font rendering issue on Windows, fixed
We recently posted about the font loading solution we use on our websites. Further to implementing this technique we noticed that, on this very site, some of the fonts weren't quite rendering correctly.
Look at the examples below. The font rendering issues were slight but there was a definite problem with uneven character weighting in the Google fonts we were using (Noto Sans and Cabin).
It's particularly noticeable on the crossbars of the capital Hs but you can also see that some of the ascenders are too bold. Capital Ss also show problems where their terminals on the cap line are bolder than their finials on the base line. There is a similar issue with the bold lower case C. Finally, the apices of the lower case Ts aren't really rendering at all!
(Also note that the rendering issues are only obvious in the first paragraph of text and in the bold title. The second paragraph of text appears to be rendering correctly.)

Tying down what was causing this proved tricky for the following reasons:
- The problems are only apparent on Windows machines
- The problems are more apparent in IE than Chrome
- The problems are only apparent with certain font families
- The problems are only apparent at certain font size/weight combinations
So, to get to the bottom of this we needed to unpick our font loading solution... We use Font Squirrel's Webfont Generator to generate compressed WOFF and WOFF2 files from the original TTFs.
The generator offers three modes: Basic, Optimal, Expert. We'd entrusted it to 'Optimal' and it transpires that this was causing our problems.
Font Squirrel's 'Expert' mode allows you to adjust a whole host of settings. (See below for a subset of these.)

It turns out that the culprit in this instance is Font Squirrel's Truetype Hinting algorithm. Font hinting is a complicated matter, the nuances of which are beyond the scope of this post, however, suffice to say that automating the hinting process via the default 'Font Squirrel' setting proved problematic.
When we set Truetype Hinting to 'Keep Existing' or 'TTFAutohint' our problems vanished.
It's worth noting that we've not seen this issue on any other projects so this setting must only affect certain fonts negatively. However, as a bonus, this change also involves a file size saving for the variants of Noto Sans our website uses. See the comparison below:

As such we've now added a config file which includes this change ("tt_instructor": "keep") to our boilerplate repo.
Moral of the story - leave font hinting to the font designer!
The best Gulp build tool for WordPress
As developers, we have a seemingly endless supply of tools at our disposal thanks to the hard work and generosity of the wider development community who, by and large, share their hard work in the hope of making their peers' lives that little bit easier.
At Go Tripod we're always on the lookout for new ways to streamline our processes and improve the quality of our work. Task runners are a great example of this.
Utilising JavaScript, programs like Gulp and Grunt can automate all kinds of labour-intensive tasks including, for example:
- compiling, concatenating and compressing assets (eg. images, CSS, JS)
- live browser reloading for speedy development and testing
- auto-prefixing for optimal browser support
With these jobs running automatically in the background we're able to focus entirely on the task at hand. This increases productivity and makes sure the code we're writing is as performant and compatible as it can be.
Our current build tool of choice is a modified version of the excellent WPGulp by WordPress aficionado, Ahmad Awais. It does most of the things we need out of the box but since we use Timber to separate logic from styling I've amended it to watch .twig files as well as .php files.

Along with a few other minor adjustments, the result is a fast and feature-packed workflow which allows us to build WordPress sites of the highest quality in record time.
Using chatbot software to improve transparency - introducing Yus
Ten years ago, just before Go Tripod was formed, I was an independent contractor with a strong eye on my bottom line, so I created a way to track my income at a glance. This was extremely useful at the time, as it helped a young Colin make sure he had enough pennies at the end of each month, and we now use a much more advanced version at GT to make sure we're bringing in some good dollar every year.
The business challenge
We'd like to be able to quickly get information on our current financial performance, including monthly tallies and current debtors and creditors. This data should be available in a frictionless manner - without needing to log in to accounting software, crunch numbers or run reports.
The solution (In a nutshell)
In my decade-old solution, I was using Freshbooks for accounting. At GT we now use Xero, a cloud-based accounting program that makes the whole process of invoicing, filing returns and tracking payments much easier. We worked out a way to pull our financial information out of Xero and have it posted into Slack, the chat service that our team uses each day. We wrote a chatbot, called Yus, which we can use to request vital information about how well Go Tripod is performing. Yus will also give us a breakdown of bills that we need to pay and invoices that are outstanding.
The solution (The software side)
We built Yus using a combination of Botkit and xero-node. It runs on Heroku. Here's a sample of what it can do:

We can send Yus simple phrases and it'll respond accordingly. This time round it's decided to exaggerate somewhat.

In this case we get a list of clients who we might otherwise not have chased for payment until our bookkeeper flagged this up. Now, Yus can help us liaise with our clients to make sure invoices don't slip through the cracks.
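Yus itself is built on Botkit and xero-node, but the core move — pushing a line of financial status into a Slack channel — can be illustrated with Slack's incoming webhooks instead. A minimal Ruby sketch (the webhook URL is a placeholder you'd create in your own Slack workspace):

```ruby
require "json"
require "net/http"
require "uri"

# Build the JSON body that Slack incoming webhooks expect
def slack_payload(text)
  JSON.generate(text: text)
end

# Posting it (webhook URL elided; Slack issues one per channel integration):
# Net::HTTP.post(URI("https://hooks.slack.com/services/..."),
#                slack_payload("3 invoices outstanding"),
#                "Content-Type" => "application/json")
```

A full chatbot adds listening and conversation on top, but even this one-way push covers the "frictionless" requirement: the numbers arrive where the team already is.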
Conclusion
Transparency is key. Low friction transparency is even better. By using chatbot software we can listen to the heartbeat of Go Tripod and take immediate steps to perform course corrections. Fancy building some clever technology to improve your business? Get in touch.
A little word about Yus
The name "Yus" comes from two letters in the Cyrillic script: Little yus (Ѧ ѧ) and big yus (Ѫ ѫ). We use this character to represent a tiny tripod in our email footers.
