The team we were working with had a utility function that had not yet been given type expectations due to its complex return type. This function could return a different type based on its input type. TypeScript provides a feature for solving this kind of complex return type called conditional types. Learning how to define a conditional return type allowed us to inform TypeScript about the condition, resulting in better type awareness in all code that called the function.
Our client’s TypeScript codebase had a utility function for user permissions called getPermissions. This function had been written to support passing the name of a single permission or multiple permissions. In response, the function would return a single boolean or an array of booleans. Returning an array allowed developers to destructure the result in the order the permissions were passed to the function.
// getPermissions could be passed a single permission
const hasReadPermission = getPermissions("user:read");
// or multiple permissions
const [hasRead, hasWrite] = getPermissions(["user:read", "user:write"]);
The team had made the decision to turn on TypeScript’s noImplicitAny rule (a great TypeScript discussion for another time). This meant the getPermissions function needed to be given proper type expectations.
getPermissions relied on fetching server data and utilized a third-party library. I’d like to strip those details away to get a closer look at the TypeScript story. A very simplified example of this function for getting user permissions might look like the following code:
const grantedPermissions = [/* assume there are some permissions here */];
function getPermissions(permissionKey: string | string[]) {
  if (Array.isArray(permissionKey)) {
    return permissionKey.map((key) => grantedPermissions.includes(key));
  }
  return grantedPermissions.includes(permissionKey);
}
The only explicit TypeScript type annotation here is that the function must accept one parameter that is either a string or a string[] (array of strings). TypeScript is able to infer the return type based on the values we return: boolean | boolean[]. Note that boolean | boolean[] is a union type and will be the type given to any variable that is assigned the result of getPermissions.
The union return type creates a decision tree that every caller of the function is forced to resolve. Each time getPermissions is called, logic will be needed to decide if the result is a single boolean or a boolean[]. Even though you, the developer, can understand that by passing an array, the function should return an array, the type system does not have enough information.
// the result will need to be cast
const hasPermissions = getPermissions(["secret:read", "user:write"]);
const [perm1, perm2] = hasPermissions as boolean[];

// or an if check is needed to verify the type before treating it as an array
if (Array.isArray(hasPermissions)) {
  const [perm1, perm2] = hasPermissions;
}
This is a utility function and, in the real-world example, it is called dozens of times throughout the codebase with the expectation that it will be used frequently going forward. A good thing for us as developers to prioritize here is making the function as easy to use as possible.
To solve this, you don’t necessarily need to know about deeper TypeScript features like conditional types. One possible solution simply involves creating two functions: one for a single permission check and another for checking multiple permissions. This could be a perfectly viable solution.
function getPermission(permissionKey: string) {
  return grantedPermissions.includes(permissionKey);
}

function getPermissions(permissionKeys: string[]): boolean[] {
  return permissionKeys.map((key) => grantedPermissions.includes(key));
}
This clears up the need for the caller code to make a decision about the return type. However, it leaves the developer of the calling code with the responsibility of knowing about the two different functions. In our case we decided there was value in having a single function that could handle both cases. A quick look through TypeScript documentation led us to a great solution.
TypeScript provides many useful tools for refining type expectations and constraints. One that seemed perfectly fit for this situation is called conditional types. TypeScript conditional types can be used in combination with TypeScript generics to define a type that depends on a condition. In this case this allowed us to configure a return type that changes based on the input type.
This is an example of what the function definition could look like with conditional types:
function getPermissions<KeyType extends string | string[]>(
  permissionKey: KeyType
): KeyType extends string ? boolean : boolean[] {/* ... */}
In this example the generic type KeyType captures the type of the permissionKey parameter when the function is called. The return type is a conditional type defined as boolean if the KeyType is a string and a boolean[] if the KeyType is a string[].
I sometimes find that giving types a name can make their purpose more clear. The following code is the same function definition with named types.
type PermissionsKey = string | string[];
type PermissionsResponse<Key extends PermissionsKey> = Key extends string ? boolean : boolean[];

function getPermissions<Key extends PermissionsKey>(permissionKey: Key): PermissionsResponse<Key> {/* ... */}
Using conditional types in this function improves the type awareness of all calling code. Here’s a complete example of the function to help to make the benefits clear.
type PermissionsKey = string | string[];
type PermissionsResponse<Key extends PermissionsKey> = Key extends string ? boolean : boolean[];

const grantedPermissions = [/* assume there are some permissions here */];

function getPermissions<Key extends PermissionsKey>(
  permissionKey: Key
): PermissionsResponse<Key> {
  if (Array.isArray(permissionKey)) {
    return permissionKey.map((key) => grantedPermissions.includes(key)) as PermissionsResponse<Key>;
  }
  return grantedPermissions.includes(permissionKey) as PermissionsResponse<Key>;
}
Callers of this function will be able to infer the return type based on the permissionKey parameter they pass.
/* Calling getPermissions with a string */
const hasReadPermission = getPermissions("user:read");
// The editor and compiler know hasReadPermission is type boolean

/* Calling getPermissions with a string array */
const hasPermissions = getPermissions(["secret:read", "user:write"]);
// The editor and compiler know hasPermissions is type boolean[]
// and will consider the following statements valid
hasPermissions.length
hasPermissions[0]
const [hasRead, hasWrite] = getPermissions(["secret:read", "user:write"]);
This highlights one of the things I find to be super useful when using TypeScript. My productivity is boosted when my editor is able to give quick feedback regarding the types of my variables and functions.
This simplified example revealed a TypeScript limitation we did not encounter in the real function due to some additional abstraction in the original code. Ideally I would like to show an example that does not require casting the return types with as.
// IDEAL EXAMPLE; BUT DOES NOT COMPILE
const grantedPermissions = [/* assume there are some permissions here */];

function getPermissions<Key extends PermissionsKey>(
  permissionKey: Key
): PermissionsResponse<Key> {
  if (Array.isArray(permissionKey)) {
    return permissionKey.map((key) => grantedPermissions.includes(key));
  }
  return grantedPermissions.includes(permissionKey);
}
Unfortunately, in this case the Array.isArray check is not sufficient to narrow TypeScript’s understanding of the return type. If we discover a better solution in the future we will update this post.
Ultimately using a conditional type definition gave our team the benefits of accurate type expectations and a function that was easy to re-use.
Have questions, or want to talk about this post with other developers? Join the conversation in the N.E.A.T. community.
Another big part of Ruby’s shine: the rich ecosystem of gems and tools surrounding it.
Over the years, I have accumulated quite the toolbox when it comes to working with Ruby. Here are some of my personal favorite tools and gems. (I tried not to focus too much on Rails, but obviously Rails occupies a significant space in the Ruby world, so it’s hard to avoid.)
A tight feedback loop is a game changer. Shortening your feedback loop will always pay dividends. Here are a couple of tools I use to shorten mine.
I highly recommend watching this talk from one of Test Double’s founders, Justin Searls — especially the section starting at 31:52 where he does the math on the importance of feedback loops. If it doesn’t make you obsessed with their importance, nothing will!
An oldie but a goodie: Letter Opener.
Do you have some flows that are dependent on email? Of course, you do! Think password resets, for example. Your options are:
Or you could just have the email automatically open in the browser!
The letter_opener gem, from Ryan Bates (the creator of RailsCasts), does just that.
If you still want an interface/list like what you get with Mailhog/Mailcatcher, you can get one through letter_opener_web.
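Getting it wired up is quick. Here’s a minimal sketch of the typical setup, based on the gem’s README; adjust to taste:

# Gemfile (development only)
group :development do
  gem "letter_opener"
end

# config/environments/development.rb
Rails.application.configure do
  # Open outgoing mail in the browser instead of actually sending it
  config.action_mailer.delivery_method = :letter_opener
  config.action_mailer.perform_deliveries = true
end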
I feel like I often take live reload for granted nowadays, especially when working with React. But it doesn’t come stock with Rails! So I suspect some of you might still be spamming refresh in your browser. Save your keyboard; automate it!
It’s such an important tool in keeping a tight feedback loop. There are multiple gems to add live reload. My go-to gems for more than a decade have been rack-livereload and guard-livereload; you need both. They are a bit more complicated to install and require running a second process next to the Rails server (guard), but they work on any Rack app, not just Rails, and they make it easy to opt out of live reloading when you need to, just by killing the guard process.
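For reference, here’s roughly what that setup looks like. Treat it as a sketch: the exact middleware insertion point and Guardfile watch patterns come from each gem’s README and may need tweaking for your app.

# Gemfile
group :development do
  gem "rack-livereload"
  gem "guard-livereload", require: false
end

# config/environments/development.rb
Rails.application.configure do
  # Injects the livereload client script into pages served in development
  config.middleware.insert_after ActionDispatch::Static, Rack::LiveReload
end

# Guardfile (a starting point; `guard init livereload` generates a fuller one)
guard "livereload" do
  watch(%r{app/views/.+\.(erb|haml|slim)$})
  watch(%r{app/helpers/.+\.rb})
  watch(%r{public/.+\.(css|js|html)$})
end

Then run bundle exec guard in a second terminal next to your Rails server.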
If your app is a Rails app, you could also look at rails_live_reload. It’s a newer, Rails-specific gem. I’ve never used it, but it looks a little bit simpler to install.
MiniProfiler is a classic; it’s in the default Gemfile of a new Rails app for a reason.
Chances are you’re already using it, but did you know it can do a lot more than just showing the number of SQL queries and the time it took to process the request?
Installing stackprof and memory_profiler allows rack-mini-profiler to help you find your bottlenecks with flame graphs and your memory leaks with a memory profiler.
Something that is not immediately obvious: rack-mini-profiler works even if your app is an API. It collects request data until you make a request to an HTML page; it doesn’t clear its buffer on every request. Just keep an empty “debug” HTML page around or hit your 404 page if your app is a Rails app. (See this StackOverflow issue about rack-mini-profiler and APIs.)
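Setup is just a matter of having the extra gems alongside rack-mini-profiler in your Gemfile, something like this:

# Gemfile
gem "rack-mini-profiler"
# Optional, but these unlock flame graphs and memory profiling in the profiler UI
gem "stackprof"
gem "memory_profiler"

Once installed, appending ?pp=help to any URL lists the extra profiling modes that become available.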
Debugbar is pretty new, so I don’t have much experience with it, but it looks pretty neat, and I will definitely add it to my next project. It is Rails-specific and has significant overlap with rack-mini-profiler, but I am excited to see it expand (pretty rapidly!) and it has that new car smell.
I’ve used both pp_sql and NiceQL for a long time on various projects now. Both will help you prettify your SQL queries in your logs and in the console.
They are really helpful when trying to understand a complex query. I would not necessarily recommend them in production. In my experience, they affect performance negatively, but they are perfect for development.
NiceQL is not specific to Rails/Active Record. To integrate it with Rails, use RailsSQLPrettifier instead (a wrapper).
NiceQL does syntax highlighting, which is pretty nice! However, it requires a bit of configuration to act on your logs.
pp_sql does the logging out of the box but lacks the syntax highlighting (which might be a feature depending on your use case).
Hirb is great when inspecting elements in the console. It’s a mini view framework for IRB/console that can handle displaying information in tables and pages. It’s not quite powerful enough to build a full-fledged TUI application, but it’s really useful for quickly inspecting data in the console. Say you want to print the attributes of the last 10 signed-in users: Hirb lets you display them as a table instead of a bunch of long lines, which makes it a lot easier to visually parse the information. It’s not Rails-specific but comes with Active Record support out of the box.
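Turning it on in a console session is a one-liner. A small sketch (the User query is just an illustrative example):

# In a Rails console (or IRB) session
require "hirb"
Hirb.enable   # render query results as tables from here on

# e.g. inspect the last 10 users to sign in (illustrative model and column)
User.order(last_sign_in_at: :desc).limit(10)

Hirb.disable  # back to the default output when you're done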
This one is Postgres-specific, but if you’re not running Postgres… well, you should.
PGHero offers a dashboard that will help you manage your Postgres database. It suggests indexes, helps you find N+1 queries, and identifies your slow queries.
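Getting the dashboard, per PgHero’s docs, is just the gem plus a mounted engine; a quick sketch:

# Gemfile
gem "pghero"

# config/routes.rb
Rails.application.routes.draw do
  # Protect this route with authentication before exposing it anywhere public
  mount PgHero::Engine, at: "pghero"
end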
I think this gem should be available out of the box in Rails. Rails already gives you timing and allocation information; query_count adds the number of queries performed to your logs and lets you assert on the number of queries in your tests. Here’s the last line of a request log as an example:
Completed 200 OK in 75ms (Views: 36.3ms | ActiveRecord: 1.6ms | SQL Queries: 2 (0 cached) | Allocations: 63218)
This one is not a gem, but eh, did you know you can hit the /rails/info/routes route in Rails and get a list of your routes? No need to call bin/rails routes. Plus, it’s searchable, and you can toggle between the path and URL helper. Great when you need a quick copy-paste for your views.
Scenic lets you create versioned database views through the Rails migration system and use those views as models as if they were regular tables.
With Scenic, the business logic of very complex pages can be extracted to an SQL query, making the actual Rails code very simple and close to the default CRUD scaffolding (fetch the model, show the model).
F(x) does the same but for database functions and triggers.
If you’ve ever felt the need for an escape hatch from Active Record, the need to run a manual query, take a look at them. They are great! Probably my two favorite gems.
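To give a feel for the Scenic workflow, here’s a rough sketch; the view name and model are made up for illustration:

# Generate a view and its migration:
#   bin/rails generate scenic:view search_results
# ...then put your SQL in db/views/search_results_v01.sql

class CreateSearchResults < ActiveRecord::Migration[7.1]
  def change
    create_view :search_results
  end
end

# app/models/search_result.rb: the database view now backs a regular model
class SearchResult < ApplicationRecord
  def readonly?
    true # views are read-only, so guard against accidental writes
  end
end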
Ahoy and Blazer are a pair of tools that allow analytics and data visualization respectively. Before you pull out the big guns when it comes to analytics and data visualization, take a look at Ahoy and Blazer. They might just do what you want for free!
Ahoy (analytics) is powerful because it lets you join across your analytics and your models, without having to send everything to a separate database/data lake. By default it tracks request events grouped by visit and integrates with Devise to automatically tag the relevant user on the events. It’s super easy to start tracking custom events from either Ruby or JS.
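As a taste of how little friction there is, tracking a custom event looks roughly like this (the event name, properties, and search helper are made up for the example):

# From a controller (or anywhere with access to the ahoy helper)
class SearchesController < ApplicationController
  def create
    results = perform_search(params[:q]) # illustrative helper
    ahoy.track "Ran search", query: params[:q], results: results.size
  end
end

# Or from JavaScript via the ahoy.js client:
#   ahoy.track("Clicked upgrade", { plan: "pro" });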
Blazer (data visualization) is pretty basic, but it’s hard to have an easier setup than that. It lets you make read-only queries to your DB and display the results as tables or graphs. You can then save or group those as dashboards.
Depending on the size of the project, they can bring you pretty far before you outgrow them. After all, they are good enough for Instacart!
Arguably the most famous gem to come out of Test Double. RuboCop is a massively configurable linter for Ruby. There are so many options that it leaves a lot of room for bikeshedding and decision fatigue.
What if I just want my code to look clean and not flip-flop between double and single quotes every other line, without spending any time in the config?
With Standard you get RuboCop without any of the configurability! That might sound like a joke, but (somehow) it’s great! 15M+ downloads so far, so I must not be the only one finding value in an easy, no-nonsense linter config.
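Adopting it is about as simple as linting gets; a minimal sketch:

# Gemfile
group :development do
  gem "standard"
end

Then lint (and auto-fix) the whole project with bundle exec standardrb --fix.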
Have you ever wanted to write a small website without pulling in the whole of Rails? Middleman is for you!
Middleman is a static website generator written in Ruby. If you are used to the Rails view layer, you’ll feel right at home.
When you’re working in a Rails app, Active Support adds a lot of extensions and utilities to Ruby. Just by habit, I often try to reach for them when working on pure Ruby scripts, only to realize they’re not there. Wouldn’t it be nice if you could get those utilities and extensions without pulling in all of Rails? Well, you can!
I use it often in scrappy one-off scripts. It has come in handy multiple times.
Take a look at the Stand-alone section of the Active Support guide for instructions on how to use it outside of a Rails project.
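For example, in a plain Ruby script you can pull in everything with active_support/all (or cherry-pick individual core_ext requires):

# A standalone Ruby script, no Rails involved
require "active_support/all"

puts "test_double".camelize           # => "TestDouble"
puts "person".pluralize               # => "people"
puts 3.days.ago                       # the familiar time helpers work too
puts({ a: 1 }.deep_merge(a: 2, b: 3)) # => {:a=>2, :b=>3}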
There are tons more Ruby tools and gems I could add to this list. Some of them I reserved because they deserve a whole post to themselves.
Anything you’d add to this list? Join the conversation in the N.E.A.T. community. (Read about the N.E.A.T. community here.)
Struggling to meet deadlines and ship quality products fast enough?
The knee-jerk reaction might be to throw more people at the problem — but more than a decade of experience across more than 1,000 projects tells us that can be a recipe for disaster.
Here’s a reality check: Smaller teams move faster.
You can achieve more with less: simpler processes, simpler communication and fewer cross-team dependencies – which is why smaller teams translate to speed-to-value and agility.
Larger teams, on the other hand, cause more communication complexity and a higher risk of eventual layoffs. They also cost more from a sheer economic standpoint.
“We hear of teams increasing their engineering personnel by 10, 20, 50, even 100s of developers. It’s so common at this point it’s become a badge of honor and a means for proving the value of an organization for the next round of investors,” Test Double CEO and co-founder Todd Kaufman said. “There’s only one problem with this line of thinking: it’s wrong.”
Just look at when Facebook purchased WhatsApp for $19 billion: WhatsApp had more than 500 million users — and only 35 engineers.
Or look at Amazon, where Jeff Bezos instituted a “two-pizza rule”: he would not set up or attend a meeting if two pizzas couldn’t feed the entire group. Smaller teams are more efficient and agile.
Or in one of our own recent client engagements, we implemented a service change that improved revenue by about $1 million per month – and it took only three consultants and a couple weeks.
There’s an entire book dedicated to this phenomenon for good reason. In “The Mythical Man-Month,” seasoned software practitioner Fred Brooks challenges the conventional wisdom that adding more manpower to a software project speeds up its completion.
Our approach: Helping companies achieve more with less – with an emphasis on optimizing and simplifying code, automation, and testing. With the addition of product management consulting to our portfolio, we can also help you focus, streamline and develop with the highest value efficiently.
Still skeptical? Read more on why smaller developer teams move faster.
(Prefer a personalized conversation? Contact us now).
As your team grows, there are more tasks to delegate, more outcomes to deliver, more interests to consider and more communication channels to manage.
Communication complexity grows much faster than headcount because each new member can communicate with every other existing member, not just one: with n people there are n(n-1)/2 possible communication lines.
Even adding one more person adds several more communication lines, reflecting how intricate and multi-layered human interactions are within a group.
Overstaffed organizations breed complexity, communication hurdles, and a looming risk of layoffs.
A bloated staff tends to create features that aren’t requested, needed or valuable, bringing incidental complexity into play.
Whether complexity is essential or incidental, it adds to the carrying cost of software. When simplicity goes away, businesses are also forced to invest in other areas like documentation, training, and customer support.
“Doing low-priority work costs the business twice: paying people to build things it doesn’t need, sure, but also paying higher maintenance costs on the existing things it really did need. That’s because, as is often forgotten: as complexity goes up, maintenance costs increase in super-linear proportion,” said Justin Searls, co-founder of Test Double.
When demand eventually dips – and it always does – and results in layoffs, it inflicts psychological and brand damage. It also increases the risk that former employees become detractors.
Unless you’re working with a good product manager, chances are high that you’re churning through a lot of processes that aren’t efficient.
A good product manager brings clarity and strategic focus to help streamline business decisions around software investments. They help companies generate continuous revenue with technology investments.
Before you staff up, pause and consider these questions:
Test Double has led software development and product management consulting for more than a decade, from startups to Fortune 100 enterprises. We can offer clarity and understanding of the most valuable things to work on so you can have a simple and efficient process that drives business results — then work shoulder-to-shoulder with you to execute.
Contact us now for a free consultation.
We get it. Legacy codebases are often like tangled mazes, with convoluted logic, outdated practices, and patches upon patches. Navigating through such complexity can be akin to searching for a needle in a haystack, making even simple changes a Herculean task.
Years of band-aid fixes, quick hacks, and expedient solutions accumulate as technical debt in legacy systems. This debt accrues interest over time, slowing down development, increasing the risk of bugs, and impeding innovation.
A rewrite sounds like a clean slate and a chance to architect the new system from the ground up, incorporating modern design patterns, technologies, and best practices.
But rewrites have a bad reputation for good reason. In most cases, the most expensive thing you can do in software is rewrite an entire system. It’s time-consuming and often causes more headaches and risks than it’s worth.
The alternative is to unwind your legacy systems instead. Sometimes, incremental improvements or targeted refactoring efforts can achieve similar outcomes as a rewrite – but with far less disruption and risk.
If you’re considering a rewrite of your legacy codebase, it’s crucial to carefully weigh the costs, risks, and benefits before investing.
Over the last 12 years, Test Double has helped clients with both solutions: renovating a legacy codebase and large-scale rewrites. Here’s some of what we’ve learned.
(Prefer to skip straight to a phone call or personalized advice? Contact us now.)
There are exceptions to every rule, of course. There are good reasons to rewrite, and there are times when it’s the strategic business choice.
Some of the reasons it may make more sense to rewrite include:
If the system is using outdated technologies, and it’s difficult to attract, onboard and retain developers who can support those technologies.
If there has been significant change in or expansion of the project’s scope over time, to the extent that the original codebase no longer aligns with current business objectives.
When the accumulated technical debt (due to years of band-aid fixes, quick hacks, etc.) becomes so substantial that it outweighs the value of the underlying business logic.
A great example of when a rewrite made sound business sense:
In order to power its next generation of products, Cars.com decided to retool its systems for a modern cloud infrastructure and to turbocharge development with Elixir and Phoenix. We helped them with that rewrite, which enabled them to realign their technical team with a more modern software development culture.
A rewrite doesn’t seem like it should be that hard. After all, you have the source code you can look at to understand how the old thing worked and what it produced.
Here is a rundown of why rewriting is actually one of the hardest jobs in software:
Complexity overload: Software systems tend to become more intricate over time. They’re like a tangled ball of yarn; trying to untangle one part often knots up another. This complexity makes it difficult to foresee all the repercussions of changes, leading to unexpected issues down the line.
Legacy dependencies: Many systems rely on older technologies, libraries, or frameworks that are no longer actively maintained or well-documented. This dependency web can trap you, making it arduous to update or replace components without causing ripple effects across the entire system.
Lack of understanding: Often, the original developers who built the system may no longer be around. This creates a knowledge gap where the intricacies of the codebase are lost in translation. Without a deep understanding of why certain decisions were made, it’s challenging to rewrite without introducing new bugs or inefficiencies.
Scope creep: Rewrites have a notorious tendency to balloon in scope. What starts as a simple update can quickly spiral into a massive overhaul as stakeholders add new features or requirements along the way. Meanwhile, unused features are never culled from the system. We assume everything needs to be migrated, when the reality is that a fresh perspective on the problem can find a simpler solution.
Testing nightmares: Comprehensive testing is crucial for any software project, but it becomes especially critical during rewrites. Unfortunately, if the system is a nightmare being considered for a legacy rewrite, chances are the test suite is in even worse shape. That means little to no confidence you can perform the rewrite until you build up a set of automated tests that validate its current behavior. This can be labor-intensive, especially if you haven’t done it before. If you do find that a rewrite is necessary, Test Double uses this approach and can apply it to help lower the risk of your rewrite.
User experience disruption: For end users, the transition from an old system to a new one can be jarring. Even with the best intentions, changes in interface, functionality, or performance can lead to confusion, frustration, and loss of productivity. Further, rewrites driven by engineers based on the perceived functional needs of the codebase are risky. Product managers need to lead the effort to clearly define the functional requirements needed in the new platform. This is critical to cull unused features, so that the codebase is simple and maintainable.
Code area lockdown: Rewriting a complex element often requires a development freeze on that specific area of the code. The intricacies of the overhaul make it challenging to integrate multiple changes concurrently. This means other potential enhancements or bug fixes in that region have to wait, causing the evolution of the overall system to stagnate.
Software standstill: Pausing the development of existing systems to focus on a rewrite can feel like slamming the brakes on your revenue engine. This interruption in delivering new features or improvements can impact your competitiveness and customer satisfaction.
No matter what path you decide – rewrite or remediation – it’s important to first assess what caused the system to deteriorate and address those issues before tackling the codebase.
Neither a rewrite nor a renovation will address the root cause. If you use the same process and culture to create a new system, you are likely going to just end up with another legacy codebase in the next few years. This is another reason why renovation or remediation is a longer-lasting approach.
We can help assess on a more personalized level what works best for you, your codebase, your team and your business objectives.
In the meantime, some factors to consider:
What are the main issues you want to solve with a rewrite? If it’s an inability to pivot or try new product ideas, we’ve helped clients adopt new processes like Shape Up to enable experimentation through their release process.
Is opting for a rewrite instead of refactoring a strategic move to capitalize expenses? By pursuing a rewrite, can the development effort be classified as a capitalized expense, potentially offering financial advantages? Furthermore, if functionality extraction and refactoring are integrated into broader product enhancements, can these activities still be considered for capitalization?
Is your legacy system costly to run? We’ve helped clients reduce their infrastructure spend, optimize database costs and improve application server performance to reduce operating expenses of a software platform.
Rewriting and renovating legacy code can both feel scary – but we’re here to make it as seamless as possible.
We can help assess your current codebase, challenges and business objectives to help you decide the right solution for you, your team and your business. Contact us now for a free consultation.
To help with finding the right component in MUI, here’s a quick reference of the components that I most frequently confused for one another or have trouble remembering which is which. Live code examples are included so you can play with them and see how they’re implemented.
Both “badges” and “chips” sound like they refer to small discrete things. In MUI:
- Badge is a colored circle or oval that appears over the top corner of an element and usually displays a number; think of the red unread-notification badge on an iPhone.
- Chip is a bit of content with an oval around it. It can have an X icon button to dismiss it.
“Tooltips,” “popovers,” and “poppers” all sound like small things that pop up. In MUI:
- Tooltip automatically appears when you hover the mouse over an item and is just for displaying a little bit of text. It has a default style which can be overridden.
- Popover displays content in a box with an elevation and automatically dismisses when you click away from it.
- Popper displays content with no default styling, and it does not automatically dismiss when you click away from it. (Talk about confusing names; I literally typed Popover twice while writing this part of the post.)
“Alerts,” “dialogs,” and “modals” all sound like they might represent dismissable windows. “Snackbars” is familiar if you’ve worked on Android or with Material Design. In MUI:
- Alert is a colored bar rendered in the flow of a page that displays a success, warning, or error message.
- Modal is a low-level component for presenting content overlayed on your existing content, but you have to style it yourself. It handles dismissing when you click outside it.
- Dialog is a higher-level component that provides pre-styled pieces for displaying a title, content, and a bar of actions along the bottom of an overlayed window. It handles dismissing when you click outside it.
- Snackbar displays a message along one edge of the browser window. It can be configured to be dismissed manually, or automatically after a period of time.
“Menus” and “selects” both sound like ways you can select things from a menu. “Autocomplete” sounds like it’s completing as you type, and doesn’t necessarily suggest it has a menu. But in MUI, they all have menus:
- Menu is not related to a form; it’s just a menu of clickable items, like in the menu bar of desktop operating systems. A Menu can be presented from any element; this example shows it presented from a Button.
- Select is a form field that lets you choose an option from a dismissable list. It is custom-rendered by MUI to match Material Design.
- NativeSelect is like a Select, but it uses a browser <select> tag under the hood. This can allow the behavior to work better on mobile, and also may be better for accessibility.
- Autocomplete has a list of options, and allows you to type to choose from the list. There are a few different ways to use it:
- Autocomplete only lets you choose one option from a predetermined list. So it’s like a Select that allows typing to choose.
- Autocomplete with the freeSolo prop allows typing and submitting arbitrary text in addition to the options listed. So it’s more like a text field that suggests options as you type.
- Autocomplete with the multiple prop allows you to choose multiple options, which will be displayed as Chips with an X icon button to delete them. This approach is commonly referred to as “tags”.
These of course only scratch the surface of the components MUI provides. It’s worth a skim through the list of MUI components at the start of a project and periodically to refresh yourself on what they have available. It takes some mental effort to keep track of all those components, but I’d prefer that work to having to implement them from scratch myself! Happy MUI-ing!
Over the last decade, schools like Harvard Business School, Cornell University’s Johnson Graduate School of Management and Northwestern University’s Kellogg School of Management all rolled out new courses and programs aimed at teaching Product Management.
And yet, despite the rapidly growing industry of product management, many businesses still don’t fully understand how to leverage product managers to support the bottom line.
“There are probably more misconceptions about product management than there are correct answers,” said Brett Buchanan, Chief Product Officer at Test Double and founder of Pathfinder Product.
Product managers are strategic leaders who steer the product from ideation to launch and beyond, ensuring its success. They are the ones who identify the customer need and the larger business objective, serve as a cross-functional liaison, and ensure the product hits its intended success metrics.
A good product manager brings clarity and strategic focus that will streamline product delivery, help companies generate sustainable revenue from products or reduce costs for the business.
“Most companies are investing material amounts of money into technology,” Buchanan said. “However, they are not seeing the impact that they hoped for. A good product manager is going to help maximize your investments."
Last November, Test Double acquired Pathfinder Product to create comprehensive end-to-end solutions in modern software creation and product management. Pathfinder Product has led product consulting at an impressive lineup of companies, including Kroger, Lowe’s, Levi’s, Procter & Gamble, Intuit Mailchimp, OhioHealth, and Highlights.
With a team of seasoned product managers now integrated into Test Double for comprehensive software and product consulting, let’s review the fundamentals of product management – including what is a product, what a product manager is not, and the role of a (good) product manager.
Perhaps the biggest misconception to clarify first: A product is not just a set of features or a list of functionalities. A product is something that meets a specific need or fulfills a particular purpose for its users, with KPIs to own and optimize.
That can range from an item you hold in your hands to something more intangible, such as:
- Physical items – iPhone, Kindle, Coca-Cola
- An app, platform, software or service – Etsy, Instagram, Google’s search engine, Gusto
- Internal workflow products – Salesforce CRM, inventory management systems
Buchanan cautions against getting too caught up on defining what is a product.
The more helpful questions to ask, he said, include: Does this have specific KPIs that need to be achieved? How will those KPIs lead to commercial or strategic success for the business? Those are the conditions where product managers thrive.
Product managers are often mistaken for project managers or product owners. While there might be collaboration or even overlap, these are distinct roles.
Product managers are the strategic thinkers who set the vision, goals and trajectory of the product. They develop the business case and roadmap for the product, making sure it aligns with both business goals and user needs. They are responsible for the measurable outcomes of the product.
Project managers plan for successful execution and delivery. They’re more of the doer, leading the who, what and when tasks that achieve an outcome. If a product manager is the architect sketching the blueprint, the project manager is the foreman overseeing construction.
Product ownership is the role you play on a scrum team. The role typically includes activities like: defining the product backlog, prioritizing that work, and creating actionable user stories for developers to make sure the work fulfills the criteria.
Finally, one more misconception: While product managers do own the strategic goals and problems, they do not determine the solution. The cross-functional team determines the solution through a collaborative discovery process.
They also don’t have the authority of a CEO or business owner and cannot hire and fire (although leaders like a senior director, head of product or VP of product might have that authority).
Think of the product manager like the conductor of the tech orchestra.
They harmonize the efforts of developers, designers, and stakeholders to create a symphony of features that not only meet user needs but also hit the right business notes.
Good product managers know how to prioritize work against clear, outcome-oriented goals, to define and discover real customer and business value, and to determine what processes are needed to reduce the uncertainty about the product’s success.
The exact responsibilities vary a bit from company to company depending on things like B2B vs B2C, post- vs pre-product market fit and end user – but could include:
Ultimately, product managers’ success is measured by their ability to move the needle on measurable outcomes (KPIs). Product managers need to figure out how the product can produce commercial results for the company. A good product manager works to understand the bullet points above and how the product fits into it.
We provide flexible product management consulting options – including burst capacity for strategic initiatives or product coaching for leaders navigating in the midst of organizational churn.
We also offer 30-minute turbocharged sessions with one of our seasoned product leaders, tailored just for you. No cost, no strings attached – just pure, unadulterated brainstorm power. Request a consultation now.
It was Friday afternoon. I had taken a class in Microsoft SQL Server years before, on a version years out of date. I’d never really used it in any real projects. And this phone call came from five hours away, in another state.
But I was unemployed. So I said, “Sure thing. See you Monday morning!”
Then I ended the call, got in my car, and drove an hour to the nearest Borders bookstore. I purchased two promising books on Microsoft SQL Server, went to the bookstore’s in-house Starbucks, purchased a venti iced coffee, sat down with those two books and a legal pad, and mapped out my weekend in fine detail. It came down to 15 minutes for this chapter, 10 for that chapter, skip this other chapter, etc. Then I drove home and followed my script meticulously for the whole weekend. This was not easy for me; I’m a curiosity-driven learner who loves to follow a thread and go deeper. Not this weekend, though. I stuck to the plan, and on Sunday night I got back in my car and started the long drive to my new gig.
A similar thing happened when I started my career at Test Double. At that point I had been working with Java for years and hadn’t used Rails much at all during that time. In this instance, I was serendipitously a couple of weeks into a refresher with Rails. Nevertheless, on my first day, I found myself pair programming with Test Double’s co-founder and CEO, Todd Kaufman. No pressure!
So how did I get the chutzpah to jump into these significant career opportunities? Am I arrogant or sublimely self-confident? Well, I don’t think so. I don’t consider myself exceptionally intelligent; based on observation of my many illustrious peers, I’ve always felt that I’m having way too much fun with an average-intelligence brain. I do suffer from imposter syndrome. So how does one gin up the moxie to say, “See you on Monday?” Well, here are a few thoughts on how to do that.
When I said yes to starting work as a SQL Server database administrator with only a weekend to prepare, it’s worth noting that I’d been working with databases for years at that point. I had a pretty good command of SQL, mostly the MySQL flavor. So I knew there would be commonalities. I also knew there would be differences and that I could use those deltas as a sort of knock-out pattern that could help me to mnemonify those distinctions.
In the Rails example, I’d done work with a prior version of Rails and had been leveraging that understanding in my work with Java. I compared my prior learnings of Rails to Java Struts as well as a couple of other Java web frameworks I worked with. When I came back to Rails, I brought all of that with me, giving my new observations of the then-current version of Rails a stronger contextual foundation.
In all these cases, I was able to use what I’d previously learned to develop instincts for the probable design decisions people had made when designing these unfamiliar tools. A crucial part of this is in getting an understanding of the problem which the designers were trying to solve. This can help in making better guesses about features which might exist, and even about what form they might take.
In both examples I described above, my goal was to get a footing in unfamiliar territory quickly. This meant I had to deny my innate tendency to let curiosity drive, to follow every thread, and to dig deeply. Instead, I ruthlessly time-boxed my study in order to prioritize breadth over depth. This is important to note; depth is certainly valuable. But I knew I had a natural tendency to pursue depth, and I also knew that what I needed in these instances was higher level over-arching perspective.
When it came to prepping for Rails work, I focused on subsystems that I didn’t feel I understood well, approaching each of these in the same time-boxed way. I’d allocate a specific amount of time to study and learn routing, for instance, developing a good sense for the problem it’s meant to solve, and the design values used to create it. And when the time was up, I’d rip myself away and move on to the next thing.
Maybe it was the spectre of unpaid bills piling up. Unemployment does introduce a certain kind of boldness, after all. Whatever it was, somehow I recognized in these situations (and at other points in my career), that I didn’t have a sensei who would assign to me the perfect number of menial preparatory tasks and then eventually say, now you are ready.
It’s a little bit scary, I’m not gonna lie. The desperation of need does factor in, but it’s also important to think about your own personal standard for readiness. For me in these example cases, that involved getting a mile wide and an inch deep on the particulars and distinctions of SQL Server relative to my other database experience, or working through a set series of specific Rails subsystems. Advice on readiness abounds, but you’ll still always have to make the call yourself.
It’s also important to plan how you’ll represent and defend your readiness. Personally I knew I could not (and would not) outright lie about my level of experience. But I also knew that a certain amount of pre-gig preparation is normal, especially for someone shifting focus to a near-related kind of work. I also did what I could to understand my client’s needs and expectations, and to evaluate my own chances of success. Had they asked me to help with Oracle databases instead of Microsoft ones, for example, the conversation might have been shorter.
On that fateful Monday, my first day on the job as a Microsoft SQL Server DBA, I happened to overhear a couple of colleagues discussing the difficulties they were having with getting a large quantity of data in a CSV file imported into the database they were working on. They were considering whether they needed to write a small program to do this task.
“Hey,” I piped up, “have you considered using bcp?”
“bcp?” they inquired.
I hadn’t spent much time on the chapter which talked about SQL Server’s “bulk copy program” command-line utility, and three or four days before that, I hadn’t even known it existed. I’d only paid scant attention to that chapter. But it made sense that such a thing should exist, as moving bulk data into a database doesn’t seem like a weird or fringe use case. I reached over to the bookshelf in my cubicle, grabbed one of the two brand-new SQL Server books, and thumbed over to the chapter on bcp. And just like that, I was hilariously established as a SQL Server guru.
This kind of thing happens because in software work, nobody knows everything all the time. Someone brand new to the field may well have fresh access to bits of expertise which are disused or unknown to long-experienced experts. It is hilarious, and the only appropriate response to this hilarity is humility. If you’re one of these deeply experienced folks, expect the moment when someone new to the field hands you a key to the problem you’re pairing on. And if you’re new to a particular domain, take heart: Knowledge in the field of software development is not a narrow hierarchy, it’s a vast matrix. Gaining knowledge and expertise is not a matter of climbing a ladder, it’s more like spreading out over a mountainside, searching together for clues. None of us actually knows who will be the next to shout “Eureka!” Maybe that’s scary. Or maybe it’s just part of the fun.
Offshore development centers are now sprawled across China, Malaysia, Pakistan, the Philippines, Mexico, Chile, and beyond. The promise: low-cost services from an army of programmers.
But is it truly a budget-friendly savior?
The price tag on software development goes far beyond the developers’ hourly rate. There’s the cost of time spent on communication, management, and approvals. There’s the quality of the software developed, the time it takes to develop and its impact on the business objectives. The maintenance costs required to support the software after its launch, along with the stability both during development and in the post-launch phase, also contribute significantly to the overall equation.
It’s crucial to consider that more comprehensive picture of software development to understand the real cost and value of your investment.
Now, if your primary goal is to get the cheapest possible price for your upcoming software project, then Test Double is probably not the right partner for you.
Our work goes well beyond shipping a product, with a deep commitment to quality and value. Our consultants drive improvement in collaboration, processes, and workflow. That kind of value does not come at the cheapest hourly rate. So while we are not the lowest cost, we do promise the highest value. (See more about our approach to fair contracts, weekly pricing, and open-ended contracts.)
That said, over more than 12 years and 1,000 projects in software consulting, we’ve been called in to clean up messes, including those stemming from failed offshore collaborations.
So we can shed light on what to expect from working with an offshore software development agency to help you better evaluate the return on investment.
Collaboration is tough when you go offshore. Time-zone differences, language barriers, and culture differences can all slow down the process. Expect a two to three-week ramp up period, and build in additional delays during each round of feedback or reviews.
Before you sign with an offshore firm, consider what, if any, overlap hours are available for collaboration, feedback, and approvals.
Whether it’s unpredictable geopolitical situations or the potential for communication gaps, these inherently risky landscapes may lack the robust IT infrastructure and resilience measures crucial for safeguarding against unforeseen challenges.
Your customer data, financial information and system libraries are made available to a foreign company that is not subject to U.S. laws. That has added complications for financial services institutions, healthcare organizations, utilities and other organizations that face varying degrees of government oversight.
So choosing to work with an offshore agency demands attention to not only the technological aspects, but also the strategic planning to ensure compliance and uninterrupted business continuity.
Consider the work that you’re willing to hand-off to an offshore agency. It should be well-defined and rote, without much room for misinterpretation.
If you’ve already put in the work to clearly define the project and all you want is a software developer to execute, then an offshore agency might work for you. (Even then, you still have to take extra care to clearly specify the end solution, while also reviewing what is produced to meet quality standards. Remember: Your offshore partners likely have little understanding of your business strategy.)
If you want consulting, innovation or complex problem-solving along the way, though, then an onshore agency like Test Double will offer a better return on your investment. As Eddie Kim, the co-founder of Gusto, once said: “Give Test Double your hardest problems to solve.”
In our extensive experience, we’ve observed that the lion’s share of software development costs resides in maintenance—as much as a staggering five times more than the initial build.
The way to minimize the maintenance cost in the long run is to invest in higher quality software development, with quality assurance and test-driven development processes built into the process.
So what is the experience of the developer(s) you will be working with? Will the programmers be pairing with each other and/or with your team? Who will own testing and QA?
The turnover at offshore software agencies tends to be high – up to 40 percent per year, according to the National Association of Software and Services Companies.
In contrast, at Test Double, we don’t hire entry-level developers. We bring on experienced software consultants who have a track record of success, and we work to keep them. We have a test-driven development approach that enables far faster iterations from development to production.
When it comes to quality, a really good programmer is also going to be able to get a lot more done than five or even 10 average programmers. (There’s even a name for it: the 10x developer.)
And a pair of really great programmers is worth even more, because they work so much better when pairing. Two minds working together can often find solutions that are better and faster than one. They’re also more likely to catch errors or bugs earlier, resulting in higher quality code and fewer issues later, and it’s key to knowledge transfer between developers.
As Robert Ross, CEO at FireHydrant, said: “Some consultancies are trying to just get code out the door and are designed to write stuff that will last for maybe the next 6 months. Test Double thinks deeply about problems and delivers sustainable solutions.”
In our experience, it’s tough to transition away from offshore consulting. It’s not necessarily the fault of any engineer. It’s just that offshore companies are not able to provide the same level of support and ongoing management. So the likelihood of a (costly) rewrite is much higher.
Questions to consider before working with any software agency, including an offshore agency, that will help you maximize value and minimize future maintenance costs:
Are they documenting their work as they go through so that your team can extend it?
Did they build in a level of automated testing so that changes made by your team in the future are likely free from defects?
Have you considered how you will transfer the knowledge back to your team?
Did this spark more questions? Want to chat more about our unique capabilities to deliver high-quality software for a better return on your investment? Contact us now for a free consultation.
In this tutorial, we’ll:
- Create an Elastic Beanstalk environment running a sample application
- Deploy a Rails application to it using CodePipeline
- Optionally, point a custom domain at the application
- Optionally, set up SSL
I say “optionally” for the last two steps because if you aren’t ready to register a custom domain or set up SSL, you can stop before that point and the application will still work.
Note that we won’t be covering database setup. This is just because I didn’t run across that myself because I’m using a pre-existing database on another service. Amazon does have an RDS service for providing relational databases like Postgres and MySQL. Also, although the demo app here is a Rails server-rendered app, these steps would work the same for Rails APIs such as REST or GraphQL.
Note that this doesn’t cover more advanced server configurations such as multiple instances, test and staging environments, and zero-downtime deployments.
Another important note: AWS costs money. The instance size we need to use for Rails is too big for the free tier, so you’ll be charged for it, and there’s a limit to how many build minutes can be run in the free tier as well. So keep an eye on your usage, and spin down the application if you aren’t using it.
This tutorial was written in February 2024 using Ruby 3.2 and Rails 7.1.3.
Go to the AWS Management Console and sign in or sign up. In the Search box, type “Elastic Beanstalk” and click the Elastic Beanstalk link.
From the Elastic Beanstalk home page, click “Create application.”
You’ll be taken to the “Configure environment” screen. Under “Environment tier,” make sure “Web server environment” is selected. Under “Application information,” type a name for your application; I’ll do “rails-eb”.
Under “Platform” > “Platform,” choose Ruby.
Under “Application code,” make sure “Sample application” is chosen; we’ll use the pre-provided Ruby sample first, then set up uploads via CodePipeline later.
Click “Next.”
You’ll see the “Configure service access” screen. You may see some pre-existing service roles preselected; if not, choose “Create and use new service role.”
Click “Next.”
On the “Set up networking, database, and tags” screen, scroll to the bottom and click “Next.”
On the “Configure instance traffic and scaling” screen, scroll down to “Capacity,” then find “Instance types.” You should see “t3.micro” and “t3.small” listed by default. Click the X next to “t3.micro” to remove it so that only “t3.small” is shown. When I tried to run a Rails app on a “t3.micro” instance, the bundle install step would hang, so I found that “t3.small” or larger is needed.
Click “Next.”
On “Configure updates, monitoring, and logging,” scroll down to “Platform software.” Under “Instance log streaming to CloudWatch logs,” set “Log streaming” to “Activated.” (Note that this is different from “Health event streaming to CloudWatch logs” further up on the page.)
Under “Environment properties,” we will need to add a SECRET_KEY_BASE value for Rails. One way to generate one is to run irb in your console and run the following commands:
irb(main):001> require 'securerandom'
=> true
irb(main):002> SecureRandom.hex(64)
=> "(a 64-character string)"
Under “Environment properties,” click “Add environment property,” then type “SECRET_KEY_BASE” in the “Name” column and paste the secret key base value you generated into the Value column.
Click “Next.”
On the “Review” screen, scroll down and click “Submit.”
You’ll be taken to the screen for the new environment that was created. Yours will probably be the name of your application with “-env” on the end. Mine has a “-1” as well (“Rails-eb-env-1”) because I previously created a “Rails-eb-env” while writing this tutorial.
You’ll see a message that says: “Elastic Beanstalk is launching your environment. This will take a few minutes.”
You can watch the launch process at the bottom of the page under Events. Eventually, you should see the message “Environment successfully launched.”
At the top of the page, under Environment overview > Domain, you’ll see a URL ending with .elasticbeanstalk.com.
Click it, and you should see a page that says: “Congratulations. Your first AWS Elastic Beanstalk Ruby Application is now running on your own dedicated environment in the AWS Cloud.”
Let’s confirm that our logs are working too. At the top of the page in the search box, type “CloudWatch”.
Open the CloudWatch link in a new browser tab so that your Elastic Beanstalk tab stays open too.
In CloudWatch, click “Logs” > “Log groups,” then look for a line that includes the name of your environment—in my case, it added “-env” to the name of my app, which was “Rails-eb”. You should see several files for that environment; look for the one ending in “/eb-engine.log” and click it.
(If you don’t see it, you may have missed the “Instance log streaming to CloudWatch logs” checkbox in application setup; if so, you can go to your “Environment” under Elastic Beanstalk, then “Configuration,” then “Updates, monitoring, and logging,” click “Edit,” and activate those logs.)
On the eb-engine.log page, go to “Log streams,” then click the link you see.
You should see a log of deployment output.
This will be helpful to watch for future deployments; leave this browser tab open.
So, we have an Elastic Beanstalk instance running the sample code. Next, let’s get it running our own code.
To do so, let’s create a sample Rails app to run. You may be tempted to use your real Rails app, but I’d encourage creating the sample app first instead. Your app may need additional setup that can cause errors, so let’s take the agile small step of deploying a trivial app first. I’ll keep it quick!
First, we need to make sure we’re running a version of Ruby that runs on Elastic Beanstalk. Go to Elastic Beanstalk’s Supported Platforms page and check the version of Ruby listed; it’s 3.2.2 as of this writing.
Next, check the version of Ruby you have running locally. Here’s the result for me:
$ ruby -v
ruby 3.2.2 (2023-03-30 revision e51014f9c0) [arm64-darwin22]
If your version of Ruby is newer than the one Amazon lists, install 3.2.2 via a tool like rbenv.
Once you have the right Ruby version, run rails -v and make sure you have Rails 7.1.3 or later installed. If not, run gem update rails.
Next, create a new Rails application with all the defaults:
$ rails new rails-eb
After the new command is done, create a welcome page:
$ cd rails-eb
$ rails generate controller WelcomePage welcome
Replace the contents of app/views/welcome_page/welcome.html.erb
with:
Rails on Elastic Beanstalk
Then, in config/routes.rb, add the following:
Rails.application.routes.draw do
get 'welcome_page/welcome'
+ root 'welcome_page#welcome'
...
Test this by running your app with rails s, then going to http://127.0.0.1:3000. You should see the “Rails on Elastic Beanstalk” message you entered.
There’s one more temporary change we need to make. By default, Rails enforces SSL security in production. This is very good, but to make sure we can confirm our app is working before we set up SSL, we’re going to turn that off. If you do this for a real app, make sure you turn SSL back on before you send users to it!
Open config/environments/production.rb and change config.force_ssl to false:
# Force all access to the app over SSL, use Strict-Transport-Security, and use secure cookies.
-config.force_ssl = true
+config.force_ssl = false # TEMPORARY for testing
Create a GitHub repository for your Rails app and push the code up to it.
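If you haven’t pushed a brand-new repo before, the commands look roughly like this. This is a sketch that assumes an empty GitHub repo named rails-eb under your account (your-username is a placeholder), and that rails new already ran git init for you:
$ git add -A
$ git commit -m "Initial commit"
$ git remote add origin https://github.com/your-username/rails-eb.git
$ git push -u origin main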
Now we’ll set up our Rails code to be deployed to our Elastic Beanstalk instance using CodePipeline.
Open yet another new browser tab and go to the AWS Management Console. Search for “CodePipeline”, then click the CodePipeline link.
Click “Create pipeline.”
You’ll see the “Choose pipeline settings” page. Under “Pipeline settings” > “Pipeline name,” enter a name; you can call it the same as your application, which for me is “rails-eb”.
Scroll down to “Service role” and make sure “New service role” is selected; you can keep the default “Role name.”
Click “Next.”
On the “Add source stage” screen, under “Source provider,” choose “GitHub (Version 2).”
Click “Connect to GitHub” to sign in to your GitHub account and give AWS access to your sample Rails app repo. Under “Repository name,” choose your Rails app repo. Under “Branch name,” choose the branch to deploy, which is probably “main” unless you changed it.
Under “Trigger,” choose a “Trigger type” of “Specify filter.” For “Event type” choose “Push,” for “Filter type” choose “Branch,” and under “Branches” > “Include” type “main”.
Click “Next.”
Under “Add build stage,” click “Skip build stage.” This isn’t one of those compiled languages! (Running bundle install doesn’t count as a build here.) Click “Skip” to confirm that you don’t want to be bothered with compilation.
On the “Add deploy stage” screen, under “Deploy provider” choose “AWS Elastic Beanstalk.” For “Application name” and “Environment name” choose your EB application and environment.
Click “Next.”
You’ll see a “Review” screen; scroll down and click “Create pipeline.”
After a few seconds, you’ll be taken to the screen for your new pipeline and see the message “Congratulations! The pipeline (name) has been created.”
You’ll also see a two-part diagram showing the steps of your pipeline, and it will begin executing. Warning! It’s possible your Deploy step will fail. If it succeeds, skip down to Install Success below. Otherwise, keep reading.
If your install fails, click “View details” to see why. I’ve run across the following error message (emphasis mine):
Deployment failed. The provided role does not have sufficient permissions: Failed to deploy application. Service:AWSLogs, Message:User: …/AWSCodePipelineServiceRole-… is not authorized to perform: logs:CreateLogGroup on resource: …/var/log/nginx/access.log:log-stream: because no identity-based policy allows the logs:CreateLogGroup action
If that happens, that means there’s a permissions issue. By default, the role CodePipeline created isn’t granted access to write the logs that we said we wanted created. To fix this, we’ll need to make a change in IAM, AWS’s auth tool.
First, copy the name of the role listed here. Then, in the search bar, search for IAM then open the link in a new tab (just “IAM,” not the “IAM Identity Center”).
In IAM, click “Roles.” In the search box under “Roles,” paste the role name you copied from the error message. It should match one result; click it.
Under “Permissions” > “Permissions policies,” click “Add permissions” > “Attach policies.” Search for “CloudWatchLogsFullAccess”, then next to the row that’s shown, click the checkbox and click “Add permissions.” You should be taken back to the role page, and the CloudWatchLogsFullAccess permission should now be listed in the “Permissions policies” box.
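If you’d rather do this from the command line than the console, the AWS CLI can attach the same managed policy. This is a sketch that assumes you have the CLI configured; substitute the role name you copied from the error message:
$ aws iam attach-role-policy \
    --role-name <role-name-from-the-error-message> \
    --policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess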
With permissions fixed, go back to the browser tab with CodePipeline. In the Deploy box, click “Retry stage.” This can take a little while, and we can watch the process in more detail in CloudWatch logs. Scroll to the bottom and click the “Resume” link to get the logs updated live. After a few seconds, you should see a “Starting…” line. It may stop at the bundle _2.4.10_ install line for a bit. (If it hangs at bundle _2.4.10_, you may have accidentally left “t3.micro” in your settings; if so, go back to the Elastic Beanstalk settings and configure it to only use the “t3.small” instance size.)
Once the install succeeds, you’ll eventually see Platform Engine finished execution on command: app-deploy.
Back in CodePipeline, the Deploy stage will turn green for “Succeeded.”
If you still have your browser tab open from the first time we checked the Elastic Beanstalk instance, you can reload it to see your running app. If not, go back to Elastic Beanstalk, pull up your environment, and click the “Domain” link again. You should see your “Rails on Elastic Beanstalk” message.
We’ve now got a Rails app running on Elastic Beanstalk, and each time you push commits up to the main branch, they’ll be automatically deployed!
If that’s where you’d like to stop, we’ve made good progress. But if you have a custom domain name, we can add a custom subdomain and SSL to the app as well.
AWS has its own service for registering custom domains and configuring DNS, called Route 53. However, you can use a different DNS provider, and I kind of like keeping my domains separate from any particular hosting platform. I use NameCheap.com for my DNS, so I’ll provide instructions for setting it up with NameCheap.
To make your Elastic Beanstalk instance accessible using a custom domain, we just need to create a CNAME record in your domain’s DNS. The “host” value should be the subdomain you want to use. In my case, I own codingitwrong.com, so I’ll create a rails-eb subdomain, and the site will be accessible at rails-eb.codingitwrong.com. (This won’t be running by the time you read this post, so I don’t have to keep paying AWS for that server!)
For the CNAME “value,” put the domain name of your EB instance, without http:// on the front or a / on the end, and with a . added to the end. For example, my instance was http://rails-eb-env-1.eba-jad9rjd9.us-east-1.elasticbeanstalk.com/, so for the “value” I put rails-eb-env-1.eba-jad9rjd9.us-east-1.elasticbeanstalk.com.
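Put together, the record I created in NameCheap looked roughly like this (the host and value below are from my example; use your own subdomain and EB domain):
Type: CNAME Record
Host: rails-eb
Value: rails-eb-env-1.eba-jad9rjd9.us-east-1.elasticbeanstalk.com.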
After saving the DNS entry, depending on your DNS, it can take a little while to take effect. For me, it took 1-5 minutes. After that, I was able to go to http://rails-eb.codingitwrong.com and see my running app. Nice!
A running app is good, but in 2024 you probably want HTTPS even if your site doesn’t handle any secure information. Thankfully, it’s not too hard to set this up with AWS: it will issue us an SSL certificate using the AWS Certificate Manager.
In the AWS search box, search for “Certificate Manager,” then open the “Certificate Manager” link in a new browser tab.
Click “Request a certificate.”
On the “Request certificate” screen, under “Certificate type,” make sure “Request a public certificate” is selected, then click “Next.”
On the “Request public certificate” screen, under “Domain names” > “Fully qualified domain name,” enter your full subdomain (in my case, rails-eb.codingitwrong.com). Under “Validation method,” keep “DNS validation - recommended” selected. Click “Request.”
You will be taken to the Certificates page, and a message will show that says: “Successfully requested certificate with ID …. A certificate request with a status of pending validation has been created. Further action is needed to complete the validation and approval of the certificate.”
Click the “View certificate” button.
You will see the page for your certificate. Under “Domains,” your domain will be listed with a status of “Pending validation.” You should see values in the “CNAME name” and “CNAME value” columns. If not, wait a few seconds and reload the page.
Once you have the CNAME name and value, you will need to create a CNAME entry under your domain to confirm you own it. Go back into your DNS provider and add them. Note that although the “CNAME name” has the domain suffix, in NameCheap at least I only needed to paste the “subdomain” part into the CNAME name field, so if AWS gave _123456890abcdef.rails-eb.codingitwrong.com., then I would only paste _123456890abcdef.rails-eb. For the value, paste the full value ending in .acm-validations.aws.
Save the DNS record. Again, depending on how quickly your particular DNS propagates, it may take a few minutes before AWS Certificate Manager sees the result. For me, it took about five minutes. Keep reloading the certificate page, and when it’s working, you will see “Issued” for the status.
After this, you’ll need to set up your Elastic Beanstalk instance to use that certificate. Go back to Elastic Beanstalk, open your environment, and go to “Configuration.” Under “Instance traffic and scaling,” click “Edit.”
Under “Capacity” > “Auto scaling group” > “Environment type,” change it from “Single instance” to “Load balanced.” This isn’t because we need more than one instance; we just need a load balancer to set up HTTPS on. Under “Instances,” change “Max” from 4 to 1 to avoid being charged for multiple instances.
Now that you’ve changed the environment type to load balanced, further down on the page, you should see a “Listeners” section with port 80 listed.
Click “Add listener.” For “Listener port,” type 443, the standard HTTPS port number. For “Listener protocol,” choose HTTPS. For “SSL certificate,” choose the certificate for the subdomain that you just created.
Then click “Save.” There’s one more step: scroll down to the bottom of the page, then click “Apply.”
Elastic Beanstalk will take some time to set up the new load balancer. Watch the Events and wait for it to say “Environment update successfully completed.”
Now, go to the https:// version of your URL, which, in my case, is https://rails-eb.codingitwrong.com. It may still take a few seconds before it’s ready (it did in my case), but soon, your app should be available over HTTPS!
Now that that’s working, it would be best for us to re-enable Rails force_ssl, so that users can’t accidentally access it over HTTP. Change the config value back:
# Force all access to the app over SSL, use Strict-Transport-Security, and use secure cookies.
-config.force_ssl = false # TEMPORARY for testing
+config.force_ssl = true
You can change your welcome message, too, so you can be sure when the update is deployed:
-Rails on Elastic Beanstalk
+SECURE Rails on Elastic Beanstalk
Commit these changes and push them up to GitHub. You should see a redeployment be triggered in the Elastic Beanstalk Events.
Once the deployment is done, it’s a good idea to test that SSL protection is in place. Run the following from the terminal, putting in your subdomain (note the HTTP instead of HTTPS):
$ curl -v http://rails-eb.codingitwrong.com
If this outputs the HTML of your web page, this means the SSL protection is not working. But if it is working, you will see something like the following:
< HTTP/1.1 301 Moved Permanently
< Date: Sat, 17 Feb 2024 11:17:38 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 0
< Connection: keep-alive
< Server: nginx
< location: https://rails-eb.codingitwrong.com/
The 301 redirecting the user to the https URL means that Rails is not serving up the app over HTTP. Nice and secure!
Now you’re all set with a Rails app running on Elastic Beanstalk with a custom domain and HTTPS. Remember that you’ll be charged for these running services, so be sure to spin them down if you want to avoid that. You can always run through this tutorial again in the future to get another Rails app set up.
If you’d like to learn more about AWS, here are some of the resources I used while getting my Rails app running:
This year, my focus is on the skills and strategies Black business leaders develop in response to challenges. Adversity becomes a platform for us to enhance our capabilities and become even stronger. Our trials force us to cultivate superpowers.
I want to emphasize that I am not diminishing the impact of racism on our history. It’s been a horrible stain on our country since the first Africans arrived on this continent. We were considered property until 1865. Legal segregation continued until 1964. Today, we’re experiencing a resurgence of racial violence that’s all too familiar.
And yet Black business leaders are kicking ass. Despite injustice and discrimination, we’re making crucial contributions and defining culture.
So, let me offer a fresh lens through which to view our collective history – not as a sequence of obstacles but as a reservoir of strength that can shape our professional success.
My single mother had the grit to navigate the corporate landscape while raising me in a society not designed for our success. Her persistence and prowess brought her success in her career and unlocked access to education and experiences that expanded my worldview.
For Black business leaders, resilience is more than merely surviving tough times. We pride ourselves on confronting obstacles enthusiastically and using resistance as fuel to keep pressing forward. We regard every challenge as a test of our resolve, whether we’re deflecting micro-aggressions, maneuvering through corporate politics, or leading projects under impossible deadlines.
Black business leaders have a knack for innovation. Our ability to produce novel solutions under pressure makes us well-suited to the tech sector’s continuously evolving landscape. For instance, Kimberly Bryant and Cristina Jones from Black Girls Code have been instrumental in empowering the next generation of Black female technologists, providing them with opportunities to excel and build careers in STEM.
Similarly, John Pasmore, with his company Latimer, is combating bias in Artificial Intelligence. Through these initiatives, Black business leaders are developing solutions prioritizing underserved communities.
Like the great Rakim said, “It ain’t where you’re from; it’s where you’re at.” We adapt to new situations with the fluidity that comes from navigating ambiguous and unwelcoming spaces. In the workplace, we’re quick learners, able to pivot strategies, embrace new technologies, and seamlessly integrate into various team dynamics.
Our journey has imbued us with profound empathy. We are capable of leading and collaborating with understanding and compassion. This emotional intelligence fosters inclusive, productive workspaces where everyone feels seen and heard.
We believe success comes from collective action and sharing goals, a concept the Black church embraced to propel the civil rights movement. The church was a place for spiritual support and a hub for organizing, educating, and empowering African American communities. The church was where Dr. Martin Luther King Jr. inspired people to unite in solidarity for justice and equality. Today, we follow this same approach when building networks and teams. By supporting each other, we achieve collective success.
Our heritage has honed our ability to communicate with authenticity. So much of what we consider American culture comes from the contributions of Black people; look at jazz, blues, hip hop, and Gospel music. The ability to create beauty out of nothing is in our DNA. And this genius isn’t confined to the arts; it spreads across our lives. Our unconventional communication styles translate into compelling storytelling skills: whether pitching an idea, marketing a product, or engaging our stakeholders, we make an impact.
We’re no strangers to playing the long game. Our patience comes from having to navigate systemic barriers. We’re able to focus on quick wins while strategizing for long-term success. It’s about laying the groundwork today for achievements that last well into the future.
Black professionals have developed the ability to adapt our communication style to our audience. This skill allows us to integrate disparate perspectives into clear and concise messaging. Our cultural agility is an asset in global business environments, where understanding and respecting different cultural norms can make or break international partnerships.
We’ve developed the ability to imagine and work toward a better future. Even when outcomes are uncertain, our optimism is infectious. We inspire teams and organizations to strive for higher goals.
Understanding non-verbal cues, like body language and facial expressions, is critical, particularly for Black kids. From a young age, Black boys learn to interpret non-verbal cues. “Is this situation safe? Are they looking to have a problem with me? Am I in the wrong neighborhood? Do I appear non-threatening? Can they see my hands?” These questions become second nature, like a mental checklist.
This lesson hit home for me when I was 11. One morning, my buddy and I were heading to a park in Berwyn, IL, a place with a history of racial tension. Out of nowhere, police officers stopped us, acting on a call they received. They put us in their car and took us to a house we had walked past. A big dog was barking behind a tall fence. The owner, a white guy, said his sons saw us in their yard, which made no sense because of the big dog and the fence.
We hadn’t done anything wrong, so we were calm and respectful, just like our parents had taught us. When I tried to speak, the homeowner aggressively told me to “shut up.” Even as a kid, I saw where this situation was going. The guy got so angry he turned red, and a vein popped out on his forehead. His sons looked like they wanted a show. We were Black kids in Berwyn. The cops were not on our side. My friend and I knew we couldn’t argue without risking more trouble. So, we shut up. The police took us to the station and locked us in a cell for hours.
When my friend’s dad arrived to get us out of jail, we thought we might be in trouble. Instead, he gave us a reassuring nod, put his arms around our shoulders, and walked us out of the station without saying a word. We knew we had experienced a grim rite of passage.
That day, I learned the importance of non-verbal communication in navigating the world. In my professional life, it allows me to sense when the atmosphere is off in meetings or to understand a colleague’s unspoken concerns. This ability to read silent cues has become one of my most valuable professional assets.
Black professionals consistently demonstrate a commitment to values and ethics, even when easier paths might be available. This integrity builds trust and respect, laying a solid foundation for effective leadership.
Our journey has equipped us with tools ideal for dynamic professional sectors like tech and entrepreneurship. In environments where change is the only constant, our experiences give us a distinct edge. We collaborate with empathy, communicate with authenticity, and maintain our composure. Let’s lean into these strengths and bring them to the forefront of our professional endeavors during Black History Month and all year round.
Rails upgrades can also feel like a nuisance: They usually take longer than expected, and they pull your engineer away from day-to-day work and feature development where their domain expertise is critical.
Worse, along the way, the upgrades can cause unanticipated breaks. You are not only changing the foundation an app sits on, but also changing the cruftiest parts of the app that no one has touched in years.
Over more than a decade, we’ve built expertise in making Rails upgrades seamless while your team continues to deliver critical features and ship new products.
That’s why companies like GitHub, Gusto, and ZenDesk have relied on Test Double, which has more than 10 years of experience working in large-scale Rails applications.
One of our consultants was even selected to speak about Rails upgrades at RailsConf 2023.
We’ve seen the challenges often enough—across multiple Rails versions and on numerous teams—to observe a few common frustrations with major Rails upgrades that end up costing time and money.
Rails is an enormous codebase, and no single core committer knows all of it. The guide to upgrading Ruby on Rails is more than 100,000 words.
If your team hasn’t dealt with upgrades before, they’ll need to put their daily work on hold and learn this technical task on the job, which makes for a slow process.
Your team will also likely have a higher risk of introducing regressions while doing this work themselves.
The solution:
In contrast, we’ve developed an efficient process for completing Rails upgrades in a manner that has very little chance of causing other issues. We’re the low-risk option for Rails upgrades.
Our consultants have already built the expertise to minimize risks and avoid common mistakes – allowing your team to stay focused, without disruption or slowdown of day-to-day production.
Bonus: We also have developed a system that allows for sequential upgrades from one version to the next, because we want upgrades to carry forward without our involvement.
The changes demanded by an upgrade tend to be so broad-based and sweeping that it can feel necessary to halt all feature development until the upgrade effort is complete.
When making incremental progress migrating your code to a new API, it’s also easy for that progress to be unmade by other team members who might not yet be privy to each new way of doing things that’s been necessitated by a later Rails version — leading to breaks and more delays.
Alternatively, an upgrade that’s sequestered to a long-lived branch will inevitably devolve into a pressure cooker as patience wears thin and as merge conflicts become increasingly severe.
The solution:
Outsource the Rails upgrades. This may not sound like an ideal investment to make, but here’s why it makes business sense:
Companies are best served keeping their institutional knowledge focused on feature delivery, as feature delivery within their systems requires in-depth knowledge of their industry and business.
A company that uses a trusted partner to do things that are outside of their core competency can better leverage their investments in their own engineering staff.
Similarly, Rails upgrades are most efficiently completed by companies who have a history of completing them, successfully, with some of the biggest Rails codebases in the world.
Deploy your team’s deep expertise on the problems they are best equipped to solve. If you needed a new mobile app built when you have a largely backend/API engineering staff, you wouldn’t wait for a few of those engineers to come up to speed while building it. You’d instead find the expertise you needed for mobile elsewhere. Do the same for Rails upgrades.
Ruby on Rails expertise and codebase renovation is a sweet spot for Test Double.
Along the way, our experienced consultants can radiate back a fresh perspective on your system, your process, and your teams. We believe in leaving teams better than we found them.
While at Gusto, our team fixed 1,700 test failures from the initial upgrade, protected the behavior of the existing code, and rolled it out on time and under budget. Gusto software engineer Daniel summed up our work:
Test Double brings a lot of good experience and knowledge—not just with the project objectives, but also with any other issues they see. They have been able to weave in code improvements when time is available, striving to leave it in a better state than they found it.
Contact us now for a free consultation on our approach to Rails upgrades and if it makes sense for your team.
No one likes the actual upgrade to the latest Rails version. It can be a long and tedious process that costs time and money with seemingly nothing to show for it. Changes can be so broad-based that it brings all feature development to a halt, causes unanticipated breaks, or disrupts day-to-day work.
So, when budgets are limited, can you just skip the Rails upgrades? Or wait until the next major release and do all the updates at once?
In general, no. Skipping your Rails upgrades creates a serious security risk. And the longer you wait, the more likely something will go seriously wrong.
Upgrading to the latest Rails version is also a strategic move to remain competitive, protect platform stability, and enable quicker releases of new features. Skipping them becomes a form of taking on technical debt.
(Now, there can be nuance to this: There might be times when it’s necessary to defer the upgrade. For example, if limited budgets force you to choose between a minor Rails upgrade and a key feature that unlocks time-limited revenue potential, it might make business sense to defer your Rails upgrade. Teams that routinely choose to defer or skip maintenance, though, tend to delay other quality-related work as well, increasing the risk of attrition among the engineers who care most about quality (aka your best engineers).)
Test Double has more than a decade of experience in leading Rails upgrades – including for some of the largest Rails codebases, like GitHub, Gusto, and ZenDesk. We’ll break down in more detail why it’s so important to upgrade to the latest Rails version.
Bug fixes and security patches are only included in the most recent version of Rails. If you face bugs and security issues on an older version, you’re on your own, according to the Ruby on Rails maintenance policy.
Remember when Equifax was hacked in 2017? It was one of the largest data breaches in American history, exposing the personal data of 147 million people. The breach was announced 6 months after Apache Struts released an update with security patches. Equifax had ignored the update. The breach ended up costing the company $425 million.
That’s an extreme example – but it serves as an important lesson for all of us: Just like insurance, upgrades are an important investment to protect you if things go wrong.
Unfortunately, a failed security audit leaves you with few remediation options, and scrambling to fix things at that point costs more time and money than keeping your updates current in real time.
Another key business reason to keep current on Rails versions: It’s important to both maintain your current compatibility and enable your future feature development.
When you stay on an older version, though, the rest of the development world moves on without you:
Ruby, Elixir, and JavaScript are all powered by volunteer contributors. They dedicate their free time to create something new for all of us. They’re not getting paid for it, so they’re not necessarily devoting time to thinking about how new changes might break something that’s two years old.
Rails depends on external gems – but as the gems are upgraded, backward incompatibilities arise.
Gems start requiring new versions, blocking critical updates. Platform as a Service (PaaS) providers sunset your version and block new deployments.
Eventually, your team will want to add a new feature or try a new gem – and it won’t work. You’ll be weighed down by the outdated version you’re running. (We see this kind of error all the time with new clients who haven’t made the updates: “Bundle install stopped working months ago. Nobody can clone fresh and build the app anymore.”)
Another point worth discussing: Attracting top developer talent is already difficult. Attracting developers to an outdated stack is even harder, because they don’t want to be stuck programming in the 2010s.
Upgrading to the latest Rails version also unlocks a boost in developer productivity and efficiency. It’s an investment that optimizes your teams’ skills and positions them to take advantage of the latest technology.
So, what if you just update every other version or once a year? That attempt at pragmatism is more fraught than it might at first appear.
Companies fall behind for one reason or another, then decide to catch up all at once. But here’s the kicker: The older your Rails version, the longer and more Herculean the effort to catch up.
If you’re multiple Rails versions behind, we do not recommend upgrading directly to the current version all in one big go. Instead, we highly recommend an incremental approach, breaking the upgrade into manageable chunks. (Consultant Ali Ibrahim goes in-depth on this in his RailsConf talk about Zero downtime Rails upgrades.)
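To make that concrete, a single increment of that kind of upgrade might look roughly like the following sketch (assuming your Gemfile pins the rails gem and you have a test suite to lean on; exact commands vary by project):
# 1. Bump the Gemfile one version at a time, e.g. from "~> 6.1" to "~> 7.0"
$ bundle update rails
# 2. Review and apply the new framework defaults and configuration changes
$ bin/rails app:update
# 3. Run the test suite and fix failures before moving to the next version
$ bin/rails test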
It’s not just about avoiding the hassle of dealing with ancient bugs. It’s about staying relevant and agile – kind of like exercise. It’s really hard to find the time and motivation to get started, but it gets easier the more you practice. And, in both cases, it’s a necessary habit for your health.
Rails upgrades can be complicated – but our team of consultants has deep experience in efficient and seamless upgrades, so your team can continue delivering critical features and shipping new products.
We can help you plot out the best course for an upgrade based on your unique situation and what your engineering team needs to tackle daily production work.
Contact us now for a free consultation on what approach to Rails upgrades makes sense for your team.
More reading on upgrading Rails to the latest version:
I continue to find a familiar pattern, whether I was a full-time employee, a contractor, or a consultant – the mentality across so many organizations is the same. When approaching software development, our mental models mimic an assembly line: “draft, plan, deliver, repeat.” However, the assembly line is misapplied to software development because we’ve lost sight of what an assembly line is the result of and what it does. Let’s deconstruct this.
I was raised by my grandfather, who was a lineman for a major US automaker. Every year, manufacturing plants shut down for about a week to do line maintenance, machine updates, and sometimes refactoring of lines to start making the new year models. These annual shutdowns are the culmination of months of discovery, prototyping, experimentation, and planning to validate assumptions, test hypotheses, and scale production in factories. Teams outside the factory worked to determine which changes to factory logistics were necessary to solve new problems for drivers, improve car safety, and innovate into the future. They were determining the methods to deliver end products – the vehicles and the experience of driving – reliably and repeatedly. They created blueprints, documentation, and requirements for updating the assembly lines as a result of these discovery phases.
The problem is that our collective psyche in software has misapplied where this work happens in software development. How many of our software engineers on teams feel like they are merely machines on an assembly line churning out code? This is wrong, and we must course correct this model fast.
Engineers aren’t the machines on an assembly line, but rather, they are the ones building the assembly line. If assembly lines are the automation built by factory planners to deliver vehicle experiences on repeat, then our engineers are the ones writing code to deliver software experience on repeat. That code is the assembly line equivalent, not the engineers. It’s the code that delivers an experience on repeat, but it’s only through trial and error, prototyping and experimentation, discovery and innovation that we can make this work to solve real problems. We cannot achieve this in our current model. Our current model stifles the innovation and creativity of engineers when we don’t allow them to co-discover the problems worth solving for our users, customers, and businesses. This mindset robs our organizations of the potential that our talented engineers can provide. They are the technical experts and understand what’s feasible to solve problems for our customers that drive value back into the business.
Software offers a lot more flexibility than physical hardware. Once I place a machine on an assembly line, there’s a LOT of work to change that. Software, while posing its own challenges, is much more malleable to rapid change and redeployment. On day one, unlike an assembly line, we don’t need all the details to be set in stone; we can gradually solve the problems in piecemeal fashion. Our product teams can co-discover along the way as they build, experiment, succeed or fail, and understand what code needs to be written to solve problems in a way that delights users and customers and delivers value back to the business. Rather than providing prescriptive solutions, leaders should bring engineering teams into discovery to co-create what the future will look like, and let them help drive the plan and budget.
As a product manager, I’ve seen preplanning, SAFe, and a demand for absolute certainty crush innovation and demoralize great engineering talent. The root cause is this misapplied model and a demand for certainty. It doesn’t have to be this way. If you’re a leader who is expected to have certainty, take a step back and acknowledge how many projects go off the rails and are late, over budget, and have ballooned scopes. We’ve never had certainty, but we can commit to strategies that divest from certainty.
Our strategies should focus on understanding problems as we build, learn, and reduce assumptions we make. Engineers can be trusted to help discover the right problem to solve, and allow them the freedom to explore solutions. They have the technical expertise to know what’s possible and where innovation could occur. As a leader, you can shift your mindset away from optimizing for predictability to fostering resiliency and a focus on impact; from maximizing arbitrary productivity to focusing on results that improve business metrics and drive long-term success.
The world is unpredictable. The illusion of control should have been shattered when COVID-19 changed our entire world. “Given our stormy present, the dogmas of our quiet past are inadequate,” as Abraham Lincoln said in 1862. “Our occasion is piled high with difficulty, and we must rise with the occasion. As our case is new, so must we think anew, and act anew.” As our world changes, so should our approaches. We shifted our mindsets drastically to deal with a sudden shutdown the pandemic brought, and so we should continue to evolve our mindsets. The most valuable lesson the pandemic taught is how our collective ingenuity can be manifested quickly to adapt to new environments. We have the potential to change and pivot our behaviors quickly, and our businesses and our teams need to shift their mindsets now more than ever.
The rigidity of the assembly-line model in its current application prevents our teams from dreaming bigger and better. Software is a fertile ground for creativity and imagination, and our engineers are visionaries whose potential is untapped when given the freedom to co-explore problems and iterate on solutions. Rather than treating them as interchangeable parts, treat them as partners. They are best equipped to uncover and deliver technical feats when unfettered by prescriptive processes. Foster their ingenuity; don’t limit it. The future will be shaped by those who empower their teams to constantly research, experiment, and co-create. The opportunity lies with our talented people only when we have the courage to reimagine how we work.
We see this conclusion from thought leaders like Margaret Heffernan, Marty Cagan, and Melissa Perri. The time for change is ripe, and those who don’t embrace resiliency and plan for inevitable uncertainties will continue to lose to those who do. Our mindsets must move away from the misapplied models of the industrial age and move toward a new age that prioritizes discovery-led development. Nothing is more imperative. The solutions we build when we’re all involved in discovery aren’t just better for the sustainable long-term bottom lines of our companies, but they provide better solutions for consumers, users, and often result in better benefits for the whole of society.
Join the conversation in our N.E.A.T. forum.
I joined the team in October as senior content manager – and my roots are in journalism, not tech. I spent more than 10 years as an investigative reporter and editor for the top newspaper publishing company in the U.S.
I’ve also worked with very bright engineering minds in aerospace, healthcare, and IT, and it can be tough to fully understand what they do, let alone tell the story of their work in an authentic yet relatable way. So, when I joined Test Double, I was prepared for a challenge.
It turns out the developers and engineers at Test Double have a rare gift: an ability to translate the complex world of coding and software development to a broader audience. They speak human.
They embody a unique blend of business acumen, communication skills, and empathy — a human bridge between software and business. It’s a skill set that transcends what I came to expect of engineering.
Just last week alone:
Steve Jackson contextualized for me why it’s important to stay current on your Rails upgrades by drawing a parallel to exercise (it’s an important habit to start, but overwhelming to catch up all at once). With that relatable analogy, he painted a vivid picture of incremental progress vs. overwhelming change.
Ross Brandes spotted when I was trying to update website content and struggling with code breaks, and he proactively reached out to support. He simultaneously helped fix it and taught me for next time, all with an empathetic message that he was happy to help.
Kevin Baribeau broke down how he doubled the speed of a driving directions API. He turned what could have been a dry technical guide into a compelling narrative of innovation and problem solving.
These are just a few glimpses into how our consultants are not your average developers. They work shoulder-to-shoulder with teammates, while communicating effectively to the intended audience and improving the process along the way.
I recently asked co-founder Todd Kaufman why we call Test Double a consultancy rather than an agency. The answer had a lot to do with the communication skills and proactive problem solving I’ve witnessed.
“Consultancy reflects our aspirational vision to improve the way the world builds software,” Todd said. “Our consultants can deliver product like developers at an agency do, and then they go beyond that. They see the bigger picture of business goals and processes, and they have the communication skills to radiate back a fresh perspective that can help leave teams better than we found them.”
Now I get why Test Double consultants are more than great software developers. And it’s the same reason they made the unusual move to hire a journalist:
The company’s mission is rooted in the belief that software challenges are inherently human challenges. There’s a deep commitment to telling the story of software in a transparent, relatable, and empowering way – all in the name of our brand promise: Great software is made by great teams. We build both.
We didn’t want Test Double to be like a lot of the other consultancies we’d worked at, as many of them were set up for failure. They heavily incentivized a commission-based sales force to do whatever they could to close a deal. That sales force would often agree to fixed bid contracts as it was the easiest path to a signature.
All of this urgency to close a deal eventually left me and the rest of the delivery teams with a set of constraints and goals that were unachievable, so we had to choose between making the project successful in terms of profit or in terms of quality; we could never have both. Time and time again, I was left having an adversarial conversation with client stakeholders when we should have been collaborating on a successful outcome.
Having these no-win conversations consistently over a number of years at varying agencies ultimately caused Justin Searls and me to start Test Double so that we had the freedom to build something else. The beauty of starting your own company is that it comes with a blank slate for every aspect of the business. One area we debated for years was how to contract with our clients.
We had been a part of teams that did fixed bid, fixed scope, hourly billing prorated down to 15 minute increments, and even one organization that had a pay by story point delivered scheme. All of these approaches had serious issues that created bad behaviors and results for both the client and the consulting company.
Fixed bid, fixed scope engagements are so inherently bad for all parties that they warrant some discussion.
First and foremost, software estimation is really hard. No matter how many analogies we try to come up with, there is nothing quite like the process of estimating software. It’s not the same as building a house to plan; solutions in the software industry are unique. If something is truly different, then it is often very difficult to align with prior experience and therefore estimate with any level of accuracy. People who are generally really good at software estimation can still be wildly off, since much is learned about the final solution as the team delivers more functionality to end users and gathers feedback.
If a team is practicing agile software delivery methods (and they should), then they must embrace change. The cone of uncertainty is real: we learn a lot more about the ideal software solution the farther along we go in a project.
Software development is an exploratory, experimental process. We often can’t tell what features are useful, what design is most simple, or what users are actually willing to pay for until we have delivered some amount of software to them. Having a fixed scope and price contract at the outset works against the goal of exploration. Fixed bid contracts presume that everything about the final solution is known up front, and our experience tells us this isn’t true.
Sadly, client procurement teams think that a fixed bid / fixed scope engagement is a win for them as the total costs are known prior to selecting the right vendor. We have seen this manipulated by countless consulting companies who will tell a buyer anything they want to hear just to win a deal. These low-integrity providers will lowball the original quote with the comfort that their team can litigate any feedback or feature adjustment into a costly change control later, thereby bringing the project up to a profitable margin for them, but leaving the client scratching their heads, wondering why the project came in at 2.5x the cost they were quoted.
Alternatively, some vendors will provide a buffer in their quote, knowing that their estimates are likely wrong. They will still push back on any changes to the approach, though, in order to try and harvest as large a profit margin as possible, at the expense of the client’s budget and the end solution. If a project is running late in this model, will the consulting team still build at a reasonable pace with a high level of quality and test automation? Likely not, knowing that company profits (even personal incentives) may be at risk. Instead, they will begin to cut corners, doing just enough to get approval from the client, while the end users are left with software that is difficult to use and riddled with defects, and the client maintenance team is left with a solution that will be much more costly to maintain in the months and years to come. We have been steadfast in avoiding those approaches because we believe software consulting is a high-trust, collaborative model.
For these reasons and more, Test Double has always focused on leveraging a time and materials model with our clients. We always strive to build with a level of quality that reduces the cost of maintenance for our solutions, knowing that maintenance costs of any long lived software solution will likely be upwards of five times the cost to build it.
In Test Double engagements we’re also striving to ensure that the teams we are working with are in a better state than they were prior to having met us. Companies in a fixed bid model won’t take the time to collaborate with the client on a design for the system. They won’t spend any time helping the client developers level up with a technology that they are using. They try to do as little knowledge transfer as possible, knowing that it wasn’t a firm requirement outlined at the time they made their bid, so it doesn’t move them closer to getting paid. At Test Double, we feel that clients need to have complete understanding of the decisions we have made, the technologies we’ve used, and the overall design of any system we’re working on, so we take the time to communicate and collaborate with them throughout.
The approach we use at Test Double for our time and materials engagements is also slightly different from most: we work on a weekly price per software consultant, prorated down to the nearest half day. Most buyers are accustomed to hourly rates, so this also warrants some explanation. We know that context switching is the ultimate productivity killer, so we strive to have every consultant at Test Double billing to only a single client at any given point.
Further, our consultants are conditioned to notify the client if they are working a full day, half day, or not at all, as we don’t want to bill clients for time when we are completely away from client work. We also don’t believe micromanaging their work down to the hour or 15-minute increment is in line with the level of autonomy we like to provide all employees. As my cofounder, Justin Searls, once said: “It’s like buying a turkey by the ounce instead of the pound, it doesn’t make sense.” Justin’s right; it’s the wrong granularity for what a client is purchasing.
Clients then ask, “How many hours will your consultants spend on my project per week?”, which is ultimately a question I can’t answer for any given consultant on any given week. We advise our team that we expect a 40-hour work week, but are not focused on micromanaging it. Some weeks may require more, some weeks may require less, but ultimately we trust our team to deliver value to our clients in line with a 40-hour work week.
In the ever-changing industry of software, we also expect our team to continue to evolve their skills, learn new approaches, and research changes within our industry. All of these things benefit our clients, but they aren’t necessarily directly related to feature delivery. Some weeks there may be less urgency or demand, so we may spend more time sharpening the axe than chopping wood.
To support this, we reserve up to 10% of our week for individual growth and improvement. Some projects may be challenging enough that we don’t see a need for this time, while others may be a little more rote, and this growth time will serve as a means for our team to continue evolving and growing regardless of the client project. Our industry is constantly evolving. For Test Double consultants to remain as highly valued as they are, they need to have some space in their week to continue researching and learning about solutions we can bring to bear with our clients.
Prospective clients may review the prior two sections and feel like they are being taken advantage of. We’re in essence telling them that we don’t know how many hours our team will be working on their project, and some of those hours may not be heads-down focused on feature delivery. How can a client trust us then? Trust has to be the foundation for any consulting relationship, so to earn the trust of our clients we provide them with an escape hatch: our contracts are open ended. Just as Test Double benefits from the freedom to pursue unexpected changes while building a software solution, clients should have the benefit of responding to unexpected changes in their business. If a consultant is not performing, if budgets change, if a client goes on a hiring spree, or if for some other reason they realize they no longer need our services, the client should be able to respond in a manner that works best for them.
We simply ask for a week’s advance notice before removing any of our team members from the engagement, in order to tie up any loose ends, transfer knowledge to the client team, and ensure that any efforts already underway are ready for their team to run with. The client gets the comfort that if they ever feel they aren’t receiving value commensurate with their spend, they can terminate the relationship. We get a positive level of pressure to always deliver value to the engagement, which motivates our team to continue delighting our clients until the point where we’re no longer required. Most often this is used by clients when budgets change, but it is available to all of our clients should they need it for any reason.
Our approach to contracting may not be normal, but we started this business because we felt normal software consultancies were often doing a disservice to the clients that they served. We are comfortable with the ambiguity of our contract length and have found that our clients are generally not concerned with the hours our team works.
We believe that our model has proven to be successful, as evidenced by the vast majority of revenue coming from existing or past clients via extensions, expansion, or referrals. A mutual level of trust needs to be there for any consulting relationship to work but when it is there, both parties find that it is an equitable arrangement that also embraces the constant of change within software projects.
]]>However, don’t feel bad if you’ve skipped testing your logging; I haven’t done a survey or anything, but you’re certainly not alone — after all it’s not a “user-facing feature”… I mean, unless you care about the experience of the people maintaining the software, which is to say your own experience, so… maybe…. Anyway, regardless of that, I found it to be a surprisingly non-trivial task in a recent feature I implemented because of our logging library, winston. So I’m here to help.
Now don’t get me wrong, winston is a great logger for Node.js apps. It’s highly configurable, is feature-rich and makes it easy to configure a default logging format while adding extra metadata depending on the context of the module doing the logging, using “child” loggers. However, it can be tedious and/or challenging to test file output in JavaScript, particularly a winston logger where the module under test uses logger.child to add metadata that you want to test — like this one:
const defaultLogger = require('../lib/logger') // winston.createLogger(...)
const logger = defaultLogger.child({ label: 'job:processTheThings' })
export default function processTheThings() {
// some lets and stuff
logger.info('starting process')
// some important stuff
logger.info('finished process', {success: true})
}
How do we test this? Ideally, in my test, I want to be able to just say something like this to test the output, including any expected metadata (using testdouble.js, of course, but I’ll add the jest version in comments for those who prefer it):
const logger = td.replace('../lib/logger') // or jest.createMockFromModule(...)
const subject = require('./processTheThings.js')
subject()
td.verify(logger.info('finished process',{ // or expect(logger.info).toHaveBeenCalledWith(...)
label: 'job:processTheThings',
success: true
}))
This test would fail though, because my module doesn’t call logger.info; it calls logger.child. I could mock that as well, but I don’t want my test to be coupled to the module’s implementation by expecting logger.child to be called specifically (that would make for a brittle test).
“Why not just test the output?” you may ask (at least that’s what I asked myself). But this logger happens to be writing to a file, so I can’t just mock global.console and see what it says, and have you ever tried mocking/testing file output in Node.js without your test becoming hopelessly complex or coupled to the implementation? (See oil painting above.)
What I really want is a mockLogger that I can spy on without worrying about how many times child is called, if at all. I want to write the test in a family-tree-agnostic way, so to speak (i.e. just verify the job is getting done, regardless of how many children are involved, if any.2)
Building the mockLogger
Since our mocking libraries already do a pretty good job of creating mocks that look like real things (i.e. have mocked versions of all the methods and properties of the real thing), let’s just start with that:
const mockLogger = td.replace('../lib/logger')
// or ... = jest.createMockFromModule('../lib/logger')
The only catch is, we need to change the behavior of the child method so that it propagates messages sent to any child logger up to the parent for testing. We can address that with something like this:
// Takes a mocked logger, and replaces the child method to simplify testing
function mockChildLoggers(logger) {
logger.child = childArgs => buildChildLogger(logger, childArgs)
return logger
}
const mockLogger = mockChildLoggers(td.replace('../lib/logger'))
But what does buildChildLogger look like? Well, ideally I want something that will just call .info on the parent logger, with the extra metadata attached. That will make it very easy to test the output, using only the original mocked logger. But we need to do this for every log level, not just .info — and oooh winston. Good old winston lets you create as many custom log levels as you want! So rather than hard code them (which would couple our mock module to our winston config) let’s use winston’s .levels property to get a list of methods to replace. (You might think this wouldn’t work seeing as how it’s a mocked instance, but td.replace() and jest.createMockFromModule both copy primitive property values to the mocked instance, so .levels will return whatever levels your winston config defines.)
// mockChildLoggers.js
export default function mockChildLoggers(logger) {
logger.child = childArgs => buildChildLogger(logger, childArgs)
return logger
}
function buildChildLogger(logger = {}, childArgs){
const childLogger = {}
// For each logging method:
for (const logLevel in logger.levels) {
childLogger[logLevel] = (message, args) => {
// Just call the same method on the parent, adding the extra metadata from childArgs
logger[logLevel](message, {...childArgs, ...args})
}
}
// And just in case someone decides to call `.child` on the child logger...
return mockChildLoggers(childLogger)
}
Voila! This creates a mock of a winston logger, with which you can verify any calls to info or error (or any other method) on the primary mocked logger, even if the module delegates to child loggers to do the actual logging.
So now, in my test, I can simply mock my logger import with this function and expect any required logging to be called directly on the mocked logger, with all the required metadata, regardless of whether it’s passed in directly or through one or more child loggers. That is, I can define the requirement in the test, but the implementation can be changed without breaking the test. 🙌
Check out processTheThings.test.js now:
import mockChildLoggers from '@/test/mockChildLoggers'
describe('processTheThings', function() {
let logger, processTheThings
beforeEach(() => {
logger = mockChildLoggers(td.replace('../lib/logger'))
// ...or mockChildLoggers(jest.createMockFromModule(...))
processTheThings = require('./processTheThings').default
})
it('logs successful completion', async () => {
processTheThings()
td.verify(logger.info('finished process',{
// or expect(logger.info).toHaveBeenCalledWith(...)
label: 'job:processTheThings',
success: true
}))
})
})
Pro: This test is not coupled to the implementation of the module (you can use .child
, you can pass in metadata directly, you can do both)
Pro: This test is not coupled to the winston constructor in lib/logger
(you can change the log transport and format, and use any method to initialize the logger, all without rewriting your tests, as long as the export is a winston logger)
Con: This test IS coupled to the implementation of the logger itself (winston). If the module starts using something else for logging, we’ll have to rewrite the test and/or mockLogger
. But to be fair, the logging is what we’re testing, and every test has to be coupled to something it can deterministically test… (unless your test framework is an LLM I suppose…. 🤔)
Let’s step back from this particular problem for a moment to see if there’s anything more general we can learn.
The root cause of the problem here was a combination of 3 characteristics of the thing we wanted to test:
If any of these three things weren’t the case, we would have had a simple path for testing.
The most solvable of these is #3. We just needed to create a canonical testing interface, against which we could test that requirements were met, regardless of which part of the flexible API was used to meet them. Given these circumstances, you might choose to write a fully independent mock of the module being tested. In our case that would look similar to buildChildLogger
, but it would also store the log results somewhere and have methods for checking/verifying which logs were written (it would almost look like a fake, since it uses similar logic to maintain an internal state parallel to the real thing, but I would only call it a fake if the resulting state were used by the application, and in this case it’s only used by the test framework).
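For illustration, a minimal sketch of that kind of state-keeping independent mock might look like this (the writtenLogs array and verifyLogged helper are names made up for this sketch, not part of the real module):
// independentMockLogger.js: a hand-rolled mock that records what was logged (sketch only)
export default function buildIndependentMockLogger(levels = { error: 0, warn: 1, info: 2 }, writtenLogs = [], inheritedArgs = {}) {
  const logger = { levels, writtenLogs }
  for (const logLevel in levels) {
    logger[logLevel] = (message, args = {}) => {
      // Record every log call, merging in any metadata inherited from parent loggers
      writtenLogs.push({ level: logLevel, message, args: { ...inheritedArgs, ...args } })
    }
  }
  // Child loggers share the same log store and merge in their own metadata
  logger.child = childArgs =>
    buildIndependentMockLogger(levels, writtenLogs, { ...inheritedArgs, ...childArgs })
  // Test-only helper for asserting against what was logged
  logger.verifyLogged = (level, message) =>
    writtenLogs.some(log => log.level === level && log.message === message)
  return logger
}
A test would then assert with logger.verifyLogged('info', 'finished process') rather than leaning on a mocking library’s verify API.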
Fortunately, testdouble.js and jest are both pretty good at mocking things, and verifying mocked things, so we didn’t have to maintain any state! All we had to do was account for the flexibility of the API. No mocking library could know that .child(...).info(...)
counts as a call to .info(...)
. So we still solved #3, just in a simpler way: in our case, the only flexible part of the API was .child()
, so it was simple to delegate that to a similar call on the parent to create a canonical testing interface.
How reusable is this concept? Who knows… I think it depends on how easy it is to identify “a canonical testing interface” despite a module having a flexible API. I don’t know of a name for the pattern that this style of mocking follows. It’s probably just “mocks.” 😏 Mocking libraries have just gotten so good that I rarely end up having to do any manual mocking outside of complex cases. This was an interesting case for thinking about how to write a mock that can decouple tests from implementations, where the potential for coupling is the mocked module’s flexible API.
1 When I say “commissioned”, technically I mean “prompted”… but listen, if my blogging gets popular enough, I would LOVE to pay an artist to oil paint some blog headers. ↩︎
2 This ‘regardless of how many children are involved’ strategy is generally Not Recommended for use outside of the software industry. (Ahem, the chocolate industry) (Dinna fash: there is hope!) ↩︎
When we started Test Double in 2011, we had no idea what we were doing. In many ways we started the business because we believed our prior employers were doing it wrong. In some cases, they certainly were, but looking back we had a fair share of arrogance to think that we knew better than most when neither Justin nor I had ever run a business before.
Our arrogance turned out to be a benefit though. It meant that we didn’t feel a need to be like other businesses. We had the freedom to figure out the best solutions for us, regardless of the norms. This led us to establishing Test Double as a place where we talked openly about the finances of the company every month. A place where we avoided excessive office space, kegerators, and ping pong tables in exchange for higher profits that we shared with our employees. A place where we encouraged anyone who was closest to the problem to figure out the right solution. Where we stockpiled cash in order to have sufficient runway and avoid any short-term decisions that weren’t in the best interests of the business.
This ultimately served us well, but any time I talked to someone else who was running a business, I felt like an idiot or an outsider. We were noticeably different and as a first-time CEO I often felt impostor syndrome acutely. Surely, the founder and CEO on his 3rd business who just got a Series E funding round must know better than I do? Well, maybe they did, but maybe they didn’t. Fast forward 8 years and that CEO has just sold off all of the assets of his business, yet Test Double is still growing and delighting clients. Maybe operating differently wasn’t an issue, but an advantage.
Like-minded CEOs
Fast forward to April of 2020, and Justin and I had just made what was arguably the most abnormal decision in the company’s history. We converted the business to a 100% owned ESOP (employee stock ownership plan) as we felt that we should be sharing the equity and growth of the business with all employees who were creating that value.
Shortly after that conversion, I became aware of a group called Tugboat Institute. Tugboat is a membership network of CEOs across industries who choose to lead private, purpose-driven companies without the intention of an exit.
There were a lot of ESOP companies within Tugboat, so I was intrigued as we were one of the few ESOPs I had ever heard of. After a long call with their membership team, I was convinced we should be a part of this group and we joined. Within Tugboat, I had finally found a group of business leaders who thought similarly to the way I did.
Business done differently
It’s hard to describe the feeling of fit that hit me when I went to my first Tugboat summit. Everyone I talked to there led organizations differently. No one was focused on exits, shrinking the balance sheets, or striving to provide the appearance of value so that they could get their next funding round. Within this group, business leaders were focused on a purpose that benefits others. They understood if they do this well, profits will follow, but focusing solely on growing profits (especially for the benefit of a few people) can often lead to the opposite effect.
Further, if their purpose is significant (like improving how the world builds software), the business must focus on growing sustainably for decades. Tugboat names companies with this mindset Evergreen and believes that, for them to be truly exceptional, they must demonstrate alignment with a set of seven shared values it calls the 7 Ps.
Tugboat has done a great job of distilling the common values shared by businesses that tend to outperform their peers over years and years of growth. The 7Ps are obviously not the only reason for a company’s success, but Tugboat companies tend to do very well, even in challenging times, which lends some credence to aligning your business with the 7Ps.
Certified Evergreen®
At Test Double, we had carved our own path with no knowledge of the 7Ps, but wound up in great alignment with these values and with other companies within Tugboat. Today, we’re really excited to announce that Test Double operates in a manner so consistent with the 7Ps that we’ve now been Certified Evergreen®!
From Tugboat’s website:
Companies that are awarded Certified Evergreen status have undergone an extensive, rigorous assessment with the intention of continual improvement and enduring excellence around values, practices, and people. Once certified, these Evergreen companies join a select peer group of similarly purpose-driven organizations that attract top talent, valuable partners, and loyal customers. The Certified Evergreen mark calls out the commitment to excellence and longevity that these businesses share.
The certification process validated a lot of our assumptions and also provided us with some tangible feedback from industry leaders on ways that we can continue to improve and evolve. This certification isn’t an end goal, merely some validation on our way to building a business that has an outsized impact on the world, creating value for our shareholders along the way.
Maybe being different isn’t such a bad thing.
Right away, I will admit: CircleCI is not my favorite CI/CD tool. This product space has seen an explosion of new-generation tooling that isn’t all hype, offering developers improved ergonomics, functionality, and pricing compared to CircleCI.
However, sometimes, the correct choice isn’t what we want but what we have.
Given the client’s longstanding familiarity with CircleCI as a platform and the task at hand, a monorepo orchestrated with CircleCI seemed a suitable choice for encouraging code sharing and enforcing a consistent set of practices across business units.
And so, dear reader, I have identified and navigated all the foot-guns and false-starts so that you may learn from my begrudging, grumbling hours spent accomplishing this task using my not-favorite CI/CD tool.
To begin, let’s imagine a repository with the following structure:
$ tree -a myproject
myproject
├── .python-version
├── __init__.py
├── common
│ ├── common
│ │ └── __init__.py
│ ├── poetry.lock
│ └── pyproject.toml
├── poetry.lock
├── poetry.toml
├── pyproject.toml
├── subproject_one
│ ├── Dockerfile
│ ├── poetry.lock
│ ├── pyproject.toml
│ └── subproject_one
│ └── __init__.py
└── subproject_two
├── Dockerfile
├── poetry.lock
├── pyproject.toml
└── subproject_two
└── __init__.py
Two projects (subproject_one
, subproject_two
) are independently deployable services, both of which consume a common
package of library-level code.
I wanted to orchestrate a CI/CD pipeline such that merging changes to our main
branch would automatically deploy to staging and that merging changes to a prod
branch would automatically deploy to production.
Further, I had build and validation steps that were common to all three directories and build, validation, and deployment steps that were unique to each directory.
Nothing exotic here - for example, I might run a linter across all files and I might build and push an image for subproject_one
to a particular Docker repository that is different from subproject_two
.
My initial inclination was to create three configuration files. One for tasks that might be common across all projects, for example, running the tests across a project or validating that the code is properly formatted. And two others, each corresponding to our subprojects, where we could place logic specific to those projects.
I recognized that this was not the official recommendation but attempted it anyway. Orbs (CircleCI’s word for packages) bundle functionality to 1) filter based on paths and 2) invoke a “continuation” of a pipeline in order to run another file. These can be stitched together to create separate files for each project, and I did this for a time. However, in retrospect, I would not recommend it, and I migrated away from this approach. It was finicky, error-prone, and difficult to maintain. You live and learn, right?
CircleCI’s dynamic configuration prescribes creating two files: a config.yml
, where we can author jobs common to all projects and invoke our project-based workflows, and a continue_config.yml
, where we can author our project-based jobs and workflows.
You may be wondering: won’t that become a huge mess of a file? Particularly if many subprojects are present in our monorepo, one file containing many mixed concerns would make most software engineers eager to refactor.
Well, you’re right.
It could become a huge mess of a file.
But! I have identified a few techniques we can use to keep it modular, DRY (don’t-repeat-yourself), and maintainable.
First, we need a directory for our CircleCI configuration files and some proprietary setup:
├── .circleci
│ ├── config.yml
│ └── continue_config.yml
Your config.yml
file must include a setup: true
block alongside some CircleCI-specific configuration.
From there, we can move on to the aforementioned techniques.
CircleCI’s path filtering orb provides functionality to continue a pipeline based on the paths of changed files.
The mapping
parameter allows us to pass variables to our continuation configuration for use in when
clauses of our workflow.
This provides a mechanism to trigger particular workflow branches.
In practice, this will look like:
version: 2.1
setup: true
orbs:
path-filtering: circleci/path-filtering@1.0.0
jobs:
validate-source-code:
steps:
...
workflows:
always-run:
jobs:
- validate-source-code
- path-filtering/filter:
name: check-updated-files
mapping: |
common/.* run-common-workflow true
subproject_one/.* run-subproject-one-workflow true
subproject_two/.* run-subproject-two-workflow true
base-revision: main
config-path: .circleci/continue_config.yml
...
parameters:
run-common-workflow:
type: boolean
default: false
run-subproject-one-workflow:
type: boolean
default: false
run-subproject-two-workflow:
type: boolean
default: false
...
workflows:
subproject-one:
when:
or:
- equal: [true, << pipeline.parameters.run-subproject-one-workflow >>]
- equal: [true, << pipeline.parameters.run-common-workflow >>]
jobs:
...
subproject-two:
when:
or:
- equal: [ true, << pipeline.parameters.run-subproject-two-workflow >> ]
- equal: [true, << pipeline.parameters.run-common-workflow >>]
jobs:
...
Notably, this provides the flexibility to run all workflows when a change occurs in common
and only run a particular workflow when changes occur in its subdirectory.
YAML isn’t a programming language, but it is a declarative configuration language with advanced features that are not often explored.
Some of my favorite features to use are anchors, aliases, and merge keys.
Combined, they allow us to author re-usable snippets in our CircleCI template (and most yaml
documents in general):
common_settings: &common_settings
executor:
name: python/default
tag: 3.10.8
subproject_one_common_settings: &subproject_one_common_settings
working_directory: ~/myproject/subproject_one
<<: *common_settings
...
jobs:
subproject-one-validate:
<<: *subproject_one_common_settings
steps:
- myproject-checkout
- install-acme-cli
- validate
So, if you have repeated snippets of orchestration (and you likely do, given you’re working in a monorepo), creating a common block of configuration, anchoring it, and then reusing that anchor via aliases and merge keys allows you to write it once and use it everywhere, DRYing up your configuration file.
I am more familiar with GitHub Actions-style workflow triggers, which invoke particular workflows based on branch conditions. CircleCI offers similar functionality via filters.
For our example project, I wanted three different behaviors based on branching:
On any branch, run the validation jobs.
If the commit is on main and has no git tag, also deploy it to a staging environment.
If the commit is on prod and has a tag of the form v$.$.$ (such as v1.0.0), deploy it to the production environment.
In practice, this looks like:
stg-filters: &stg-filters
filters:
branches:
only: main
tags:
ignore: /.*/
prod-filters: &prod-filters
filters:
branches:
only: prod
tags:
only: /^v.*/
...
workflows:
subproject-one:
jobs:
- subproject-one-validate
- subproject-one-deploy-stg:
requires:
- subproject-one-validate
<<: *stg-filters
- subproject-one-deploy-prod:
requires:
- subproject-one-validate
<<: *prod-filters
Combined with the aforementioned anchoring, aliasing, and merge keys, we can compose a common set of branch-based rules to use in our workflows for each subproject included in our monorepo.
If you’re struggling to fit a complicated step into your job or workflow declarations, offload that logic into a script. This can be authored with bash, or even your favorite programming language, for example:
#!/usr/bin/env python
import os
NAME = os.environ["NAME"]
print(f"Hello, {NAME}!")
steps:
  - run:
      name: Invoke your complicated logic
      command: ~/myproject/.circleci/my_script.py
      environment:
        NAME: Bob
For my purposes, this was helpful to orchestrate a sequence of steps that required the usage of an API client given my deployment target did not have a CircleCI orb available. I know I would rather debug a python script than a hobbling of bash in a CI configuration file when it (inevitably) breaks.
This sounds naive, but consulting the official documentation for a CircleCI configuration file proved to be the best source of information while exploring the tools available. Further, it informed me of what options were available to me and provided brief examples for their implementation.
Googling tended to lead to outdated community answers, and ChatGPT’s suggestions for CircleCI were often flat-out wrong. So, in this instance, doing things the old-fashioned way paid the most dividends.
If you’ve made it to the end of this post, you’ve either (hopefully) added new tools to your toolbox or (unfortunately) continued to search for answers. Feel free to reach out to mavrick.laakso@testdouble.com in either case with feedback, praise, or condemnation (maybe you really like CircleCI - no judgement!). Until next time.
It started with a chance encounter at AltConf in 2014 with co-founder Justin Searls. Cue the spark for a quest for quality.
From coding to consulting, Jamie’s journey to software consulting is less about the lines of code and more about connecting dots in ways most of us don’t see.
Here are 5 of his favorite thought technologies that helped him grow from software developer to experienced software consultant and team builder.
Here’s how Jamie explains the difference between a software developer and a software consultant:
A developer thinks in requirements, builds the thing that was asked of them and delivers that well. A software consultant does those things, too, working shoulder-to-shoulder with developers to build the thing — while also making recommendations along the way to improve how the process works.
Jamie compares it to juggling.
“The first time I ever juggled was with little handkerchiefs in elementary school. The rhythm was easy because it was slower,” he said.
If you focus on any one thing when you’re juggling, you’re going to lose track of the other things in the air. The key is to zoom out, to see the task in front of you but also the broader context.
“That’s how I think of consulting now: being able to do something complex, but with a soft enough gaze to see the whole operation so the performance doesn’t collapse.”
Have you had any confusion, tension or conflict at work recently? Maybe someone was upset they weren’t included on a call? It’s usually a lack of clarity on roles and responsibilities.
“After I experienced that first-hand, it set me on this journey of improving in BICEPS, a framework for understanding the most salient priorities of what people value at work,” Jamie said.
BICEPS represents Belonging, Improvement, Choice, Equality, Predictability, Significance.
“Those are the six core needs researchers find are important for humans, and each one of us has our own hierarchy of these needs. Usually people resonate most with one or two,” Jamie said.
He uses BICEPS as a tool for accelerated mutual understanding — figuring out what’s important to others, but also for self-reflection and communicating what he values, too.
Jamie, a voracious reader who loves human philosophy as much as software coding, has another mental framework he loves: The Johari Window.
The Johari Window has helped him improve self-awareness and self-reflection, and it’s especially useful when pairing with others.
“It can help you uncover blind spots or even skills you didn’t know you had, but others can see in you,” he said.
As Jamie grew from software developer to software consultant, he began what he calls “sort of an obsession” with predictability.
One of the most influential books he’s read as a consultant is The Checklist Manifesto: How to Get Things Right. The Checklist Manifesto helped him develop a systematic and repeatable approach to consulting engagements.
“It’s written by a surgeon. One of the most striking things that he talks about is that his hospital has 42 operating rooms and more than 1,000 nurses, technicians, residents, physicians, and other staff,” Jamie explained. “So when he steps into the operating room, chances are high that he’s meeting someone for the first time before moving to the operating table just minutes later. One of the ways they team up seamlessly is through the use of checklists and pause points.”
He applies that approach of checklists and pause points to his consulting assignments. It helps him drive results quickly, even when working on a team he’s never worked with before.
“How do we create a sense of belonging and a team identity as quickly as possible, so our team is not trying to be formed at the same time we’re trying to be effective?” he said.
Another of his favorite books for consultants is Thinking in Systems. Along with The Checklist Manifesto, these two books were critical to helping him develop a systematic and repeatable approach to consulting engagements.
“Thinking in Systems specifically looks at problem solving on scales ranging from personal to global. When one of my consulting engagements grew to 11 total consultants, it gave me an opportunity to put both these books into practice — and it was one of the most satisfying and rewarding experiences of my career,” he said.
Having a repeatable system can be a source of familiarity and stability in situations that are new. It also allows you to replicate successful work, without the risk of stuff falling through the cracks or information becoming too out-of-date.
“Think of systems this way: You don’t need to reinvent the wheel all the time,” Jamie said. “If you have a go-to recipe at home for cooking dinner, you can kind of throw that together, and you don’t really think about it too much. If you’re constantly cooking something brand new from a cookbook, you’re going to constantly be looking at the recipe, and it’s more cognitive demand.”
Read more from Jamie in his popular blog post: Seven C’s of Consulting Change.
In the latest episode of the Data Mesh Radio podcast, I discuss tips for practical application of a Data as a Product mindset with industry experts Martina Ivaničová and Xavier Gumara Rigol.
This lively panel discussion dives into the nuances of Data as a Product to provide tons of great insights from panelists’ real-world experience. Some of the topics include:
Mindset vs. Outcome
What is data as a product? What about data products and data assets? Why does it matter?
Why Adopt a Data as a Product Mindset?
What are the benefits of adopting a DaaP mindset? How can you get folks to think differently about data? Maybe most importantly, how do you articulate what’s in it for them?
Pitfalls of the “Data Service Trap”
Tired of constantly reacting to data requests? Learn how a DaaP approach keeps users and goals front and center, transitioning to a more proactive and empowered model.
Overcoming Cognitive Load
The panel acknowledges and discusses how to address the cognitive load for those outside of the “data bubble” to lead to greater alignment and understanding.
The Path to Implementation
Implementing a DaaP mindset goes beyond technology; people and process are critical aspects as well. Communication and organizational change are essential. Sometimes securing buy-in is a challenge - we talk through re-thinking traditional ROI and building momentum when starting small with initial wins.
To learn more about how to maximize the value of your data, including tips on how to get started, check out Data Mesh Radio episode #291, “Panel: Data as a Product in Practice.”
Stream the audio here, or read the episode transcript here.
Check out more DaaP resources recommended by the panel: