[Only here for the code? Jump straight to the good part!]

Dependency management is one of the most important aspects of modern application engineering, but the nuance required to navigate its inherent trade-offs is something few people take the time to thoughtfully consider. So today, let’s take that time as we ship an example feature: implementing sign-in with Twitter as a third-party OAuth provider for a new Ruby on Rails app. This will give us an opportunity to explore the contours of an ever-present tension on software teams: should we solve this problem by relying on a dependency or by rolling our own implementation?

Before open source devoured the planet, businesses of all sizes routinely paid money to license software—for everything from libraries of utility functions to entire application servers. Because those licenses cost companies actual dollars, engineering organizations found themselves making “build vs. buy” decisions so frequently that many of them developed sophisticated and nuanced criteria to determine when to outsource a piece of functionality and when to own it in-house. Interestingly, this financial decision made many otherwise stodgy enterprise IT outfits surprisingly adept at managing the non-monetary risks of third-party dependencies. But that was then. Today, the best developer tools that money can buy cost zero dollars; as a result, relatively few programmers have had as much cause to grapple with the trade-offs posed by external dependencies.

Let’s start by rediscovering this debate and thinking through the consequences of aligning ourselves with either of its extremes.

If you insist on solving every little problem your application faces by coding it yourself—even when there are well-maintained, easy-to-use libraries that could do the work for you—the scope of your application’s direct responsibilities will broaden so dramatically that you might never have time to focus on whatever your application actually does. I’m reminded of a company I know that ostensibly sells publishing software, but spends most of its time maintaining a proprietary programming language they created for themselves in the 80s ("there weren’t any suitable object-oriented languages at the time!"). The fact that we have memes like “not invented here” and “reinventing the wheel” is all you need to know that this is a bad way to go about things.

On the other hand, if you find yourself reaching for the search bar of RubyGems or npm at the first hint of programming discomfort, you might eventually find that your system has become a wobbly Jenga tower, stacked with the abandoned weekend projects of total strangers and liable to collapse the next time you patch a security vulnerability or upgrade your web framework. The combined scope of your application and its dependencies might be tens or even hundreds of times larger than it would have been if you’d written more of it yourself—and regardless of who wrote which lines, you’re still responsible for making sure all of it works. The fact that we have memes like “dependency hell” and “leftpad” is all you need to know that this is a bad way to go about things.

Because both extremes lead to ruin, the only viable path forward is to think critically about your system’s essential and incidental responsibilities, to routinely analyze each dependency’s benefits and costs, and to communicate these subtleties clearly enough to arrive at a common approach with the rest of your team. But unfortunately, no matter where you learned to write software, it’s unlikely anybody taught you about this stuff. Like so many other facets of application development, we’re all forced to figure these things out on our own.

Interested in diving deeper into how open source dependencies could be impacting the maintainability of your team’s applications? You might enjoy this presentation on some of the lesser-known risks of open source exuberance.

The immediate concern we face when implementing any new feature is finding an answer to the question, “how am I going to make this work?” This is the headspace in which most application code is organized, written, and tested. Freshly shipped code is usually still dripping with forensic evidence that lays bare all of the anxieties its authors felt as they tried to get everything to work. You could try reading between the lines for clues as to how the authors anticipated the code changing in the future, but you probably wouldn’t find much. There’s just never time to worry about tomorrow’s needs when everyone is trying to hit today’s deadline.

Because a shipped feature is effectively proof that something can be made to work, the urgent focus of “how can I do this?” permeating a feature’s initial development drops from the greatest source of concern to a complete non-issue the instant it lands in production. Five years hence, nobody’s going to care that it saved somebody a day of effort to pull in a dependency if it means everyone since has had to wrangle a hard-to-use API that’s leaked throughout the rest of the codebase. And nobody’s going to find solace in code comments written under duress and littered throughout a 300-line function when they’re last on the team to yell “not it!” the next time its behavior needs to be changed.

When it comes to dependency management, the upshot of the outsized attention we pay to a feature’s initial implementation over its long-term maintenance is that it tips the scales in favor of the something-for-nothing allure of open source. Sure, developers might weigh the cost of writing something themselves against how much work it will be to install a library and interact with its API, but they’re not likely to scrutinize the downstream risks: is it still going to be actively maintained in five years? Will its performance characteristics impact the app negatively at higher scale? When new team members join, will they struggle to learn how to use it?

One question I always find useful when it’s not clear whether or not I should adopt a dependency is to ask, “when and why might this feature need to change in the future?”

950 words ago, I promised to show you how to write your own OAuth sign-in implementation, and I promise I’m getting to it! But first, let’s take a moment to ask whether we even should.

In Ruby-land, OmniAuth is far and away the most popular way to add OAuth authentication to your application. OmniAuth is expressly not an OAuth library, however. It’s a highly-generic core that specifies a protocol for yet other gem authors to write authentication adapters ("strategies") against. The value of this approach is that once you’ve added OmniAuth and a single provider (like Twitter) to your app, the standardized adapter API means that adding support for a second or third provider (like GitHub or Apple) is relatively trivial. This is very cool!

The thing is, OmniAuth’s headline feature doesn’t do much for the app I’m building, because it will only ever need to authenticate with Twitter. But that alone doesn’t mean I couldn’t pull in the omniauth and omniauth-twitter gems and get the job done, anyway.

But then I paused to ask, “why might any of these moving parts change in the future?” Change isn’t inherently good or bad, but it always demands an attentive response: best case, it’ll be as easy as running bundle update for a gem; worst case, something will break and I’ll have to work around it in my application or fork a dependency to fix it.

  • “Log in with Twitter” implements a set-in-stone standard OAuth 1.0a API, hasn’t changed in a decade, and is in use by thousands of applications that consume its public API. The single most likely reason it’ll ever change is Twitter-dot-com being shut down. On the Mohs scale of hardness, I’d rate it a 9 out of 10

  • OmniAuth provides a custom API to end users, has been under active development for a decade, and has—if not thousands—at least several dozen gems that implement its adapter protocol. It respects SemVer for its end users and seems incredibly cautious about protocol changes for its provider adapters. 7 out of 10

  • The omniauth-twitter adapter adheres to OmniAuth’s 1.x API (but hasn’t been updated for 2.0 as of this writing), has been maintained by a single individual for nearly a decade, and has only received a handful of patches over the last few years. 6 out of 10

Try this for yourself! Ask “when and why will this change?” of the three most recent dependencies pulled into your app. You might be surprised. People love dunking on leftpad for being only 7 lines long, but that’s not a compelling argument through the lens of reasons it could change: it’s a pure function with zero dependencies, after all.

The good news: none of these things are changing so much that I should be worried about my work going up in smoke overnight if I choose to rely on all three of them. The bad news: the dependencies I’d be using to authenticate with Twitter appear more likely to change than Twitter itself—they’d be the weakest link between my app and one of the most important things it needs to do.

As it happened, that chain began to show signs of stress just a week after I started work, when OmniAuth 2.0 was released in early 2021 and introduced a breaking change to address a long-standing CVE that exposed users to potential cross-site request forgery attacks. Their fix for Rails apps in particular isn’t super clear-cut, as it depends on quacking like an internal Rails class that—being a private API—could change at any time. (In my analysis above, I hadn’t even considered that OmniAuth’s HTTP behavior was decoupled from Rails and that this could perversely give it even more reasons to break on me!) Of course, all of this turned out to be moot, because omniauth-twitter hasn’t been updated for OmniAuth 2.0 yet, so my choices were to use the older, less-secure 1.x version of OmniAuth or to fork omniauth-twitter and update it myself.

Suddenly, writing my own authentication code for Twitter didn’t seem so bad.

Now for the good part.

As a user, you’re probably used to seeing a logo party of Twitter, Facebook, Google, and Apple smeared across many popular apps’ login pages. You probably also have some basic familiarity with the user experience of authenticating via a third-party service (a “provider” in OAuth parlance). Let’s start by breaking down what’s happening behind-the-scenes when a user’s browser is shunted back-and-forth when logging into an app via Twitter:

  1. A user clicks a “Log in with Twitter” button, sending an HTTP request to your app (ideally a POST protected from CSRF attacks)

  2. While handling that request, your server sends its own HTTP request to Twitter’s API (a POST to /oauth/request_token) with an Authorization header containing your app’s OAuth consumer key, a random nonce, a timestamp, and the callback URL where Twitter should redirect users after they sign in. And finally, to top it all off, the authorization header must contain an HMAC-SHA1 digest of all of those parameters, signed with your app’s OAuth consumer secret. In return, Twitter will respond with an OAuth request token and secret. (If this feels like a lot, fear not! All this terminology is needlessly samey and bleeds together—seeing it in a code example might help)

  3. Remember, your server hasn’t responded to the user’s initial POST request yet! Once you’ve received an OAuth request token & secret from Twitter to facilitate the user’s login attempt, your server should persist them somewhere (usually in a session) and redirect the user to a special Twitter login page, with the token tacked on as a query param

  4. The next step is all between the user and Twitter: they log in or they don’t; they authorize your app or they don’t. If they do log in successfully and choose to authorize your app, then Twitter will redirect them back to your app at the callback URL you specified in Step 2

  5. Just like Step 2, while handling the GET request at your callback URL, your server will send another POST request to Twitter’s API. This time, your server must ask Twitter for an “access” token (/oauth/access_token). This request is mercifully similar to the previous one, except its Authorization header must incorporate the user’s OAuth request token & secret and the body must contain a “verifier” string that Twitter appends to the callback URL as a query param. In response, Twitter finally gives you the goods: an OAuth access token & secret for use in making API requests (along with the user’s Twitter ID & handle)

  6. From Twitter’s perspective, your app is now authorized to begin making API requests on their behalf. But since the user is still waiting for the webpage at your callback URL to load, your server should probably first persist their authorization details and let them know they’re officially logged in

In case this is all clear as mud, don’t feel too bad; I’ve had to re-learn all this from scratch every time I’ve implemented an OAuth flow. For a second explanation of what’s going on, consider checking out Twitter’s own guide to logging in via OAuth.

Below, we’ll go through this process step-by-step with code samples taken from this minimal example Rails application I wrote to demo this functionality. If you have a Twitter developer account, you should be able to follow the directions in its README to see things in action for yourself.

The first step is relatively straightforward: render a link or button to kick the whole procedure off. Because the subsequent chain of redirects has meaningful side effects with security implications, it’s worth taking the time to prevent a cross-site script from initiating the login process without the user taking explicit action (this is exactly what the CVE logged against OmniAuth covered). That means you should avoid a simple <a> link that triggers a GET request and instead use a method that can leverage Rails’ anti-CSRF protection.

Working outside-in, I started with a route definition for a new resource called twitter_authorizations and defined it in config/routes.rb:

Rails.application.routes.draw do
  # …
  resources :twitter_authorizations
end

With this resource specified, Rails will generate path helper methods we can in turn pass to link_to or button_to. Because I wanted to render a normal-looking link instead of a button, I opted for link_to with its special method: :post option (if you’re not familiar, link_to’s method option works in conjunction with Rails’ on-by-default unobtrusive JavaScript library and will intercept our link click and send a POST that’s protected by the same CSRF safeguards that apply to normal form submissions).

Putting it all together, I rendered an <a> link in this ERB template:

<%= link_to "Sign into Twitter",
  twitter_authorizations_path,
  method: :post %>

Of course, clicking this link will error until a create action is defined on a controller named TwitterAuthorizationsController. Because I strive to keep application logic out of my controllers, I immediately invoked a not-yet-existent method to take the next step for me by requesting an OAuth request token from Twitter, as you can see here (source):

class TwitterAuthorizationsController < ApplicationController
  def create
    request_token = TwitterOauth.request_token
    # …
  end
end

So the user has clicked “Log in”, we’re in the middle of handling a create action, and we’ve just invoked an as-yet-undefined request_token method with no arguments.

Time to invent whatever a TwitterOauth is going to be!

I can be a little cute about how I name classes (e.g. in a verb-target scheme, like NamesClass.new) and how I organize teeny-tiny units into trees of directories to ensure corresponding levels of abstraction, but today I’m just flattening things out into easily callable methods dangling off a module to better illustrate “how does this even work”. If you’re interested in how I code for “real”, you might enjoy this talk on how to program.

For easier extraction, clearer encapsulation, and simpler testing, I try to maximize the amount of code in my Rails apps outside of classes that extend from Rails’ types. An easy way to call this out conventionally is to start building out your logic in app/lib with plain old Ruby objects. So here’s how I started my TwitterOauth module with only some error handling elided for clarity (source):

require "net/http"

module TwitterOauth
  API_HOST = "api.twitter.com"
  RequestToken = Struct.new(
    :oauth_token, :oauth_token_secret, :login_url,
    keyword_init: true
  )

  def self.request_token
    Net::HTTP.start(API_HOST, 443, use_ssl: true) do |http|
      uri = URI("https://#{API_HOST}/oauth/request_token")
      req = Net::HTTP::Post.new(uri)

      res = http.request(req)
      body = Rack::Utils.parse_nested_query(res.body)
      RequestToken.new(
        oauth_token: body["oauth_token"],
        oauth_token_secret: body["oauth_token_secret"],
        login_url: "https://twitter.com/oauth/authenticate?oauth_token=#{body["oauth_token"]}"
      )
    end
  end
end

My first pass at writing this method was to go end-to-end with some kind of HTTP request in order to get some kind of feedback from Twitter’s server as quickly as possible. I typically reach for Ruby’s built-in Net::HTTP as opposed to other gems that layer slightly-less-clunky APIs on top of it, because it’s guaranteed to work forever and almost never changes (10 out of 10 hardness 💎). As you might be able to suss out above, Net::HTTP.start opens an HTTPS connection, then (inside a block that yields http and demarcates the connection’s lifetime) Net::HTTP::Post.new instantiates a new POST request and http.request sends it, blocking until it gets a response.

I also knew from Twitter’s docs that the response body would come in the form of a query string (e.g. a=1&b=2), so I went out on a limb to pass the response body through Rack’s parse_nested_query helper to turn that into a hash for me. Finally, I like using custom value types wherever possible as opposed to passing around hashes-of-hashes (both for easier debugging and so I can painlessly expand their utility by adding elucidative methods later), so I returned an instance of a Struct named RequestToken with everything the controller would need.
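If you haven’t seen it before, parse_nested_query just splits a query string into a hash. For a flat response body like Twitter’s, Ruby’s built-in URI.decode_www_form does the same job (I’m using it here only so this toy sketch, with made-up token values, doesn’t need the rack gem):

```ruby
require "uri"

# Twitter's token responses are plain query strings. For a flat string
# like this one, the stdlib's URI.decode_www_form produces the same
# hash that Rack::Utils.parse_nested_query would.
body = "oauth_token=abc123&oauth_token_secret=shh&oauth_callback_confirmed=true"
parsed = URI.decode_www_form(body).to_h

parsed["oauth_token"]        # => "abc123"
parsed["oauth_token_secret"] # => "shh"
```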

Everything look right so far? Well, that’s good—except it doesn’t work at all yet!

Above, I mentioned that there was a pretty convoluted Authorization header that Twitter requires be sent with each request. As a result, sending these POST requests as-is results in Twitter immediately failing with a 401 error. Because the authorization header seemed complicated, I wanted to get the rest of the plumbing working before worrying about it. But now that we’d gone end-to-end, it was time to worry about the header (source):

module TwitterOauth
  def self.request_token
    Net::HTTP.start(API_HOST, 443, use_ssl: true) do |http|
      uri = URI("https://#{API_HOST}/oauth/request_token")
      req = Net::HTTP::Post.new(uri)
      req["Authorization"] = authorization_header(
        method: "POST",
        api_url: uri.to_s,
        auth_params: {
          oauth_callback_url: ENV["TWITTER_OAUTH_CALLBACK_URL"]
        }
      )

      res = http.request(req)
      # …
    end
  end
end

Because this method was already on the longer side, I immediately invoked an authorization_header method and passed it the bits I knew I’d need from reading Twitter’s docs: the request method & URL (these are both necessary ingredients of the signed HMAC-SHA1 digest), as well as the callback URL (which Twitter expects to be embedded in the header as opposed to being passed as a normal form data param).

Now, brace yourself, because constructing this header string the way that Twitter demands is easily the most complex and error-prone step in the entire procedure (there’s a reason why it was dropped from OAuth 2.0). Here’s a first pass, sadly requiring us to ingest its complexity all at once:

require "securerandom"
require "openssl"
require "base64"

module TwitterOauth
  # …
  def self.authorization_header(
    api_url:,
    method: "POST",
    auth_params: {}
  )
    auth_hash = auth_params.merge({
      oauth_consumer_key: ENV["TWITTER_OAUTH_CONSUMER_KEY"],
      oauth_nonce: SecureRandom.base64(32),
      oauth_signature_method: "HMAC-SHA1",
      oauth_timestamp: Time.new.to_i,
      oauth_version: "1.0"
    }).tap do |auth_hash|
      parameters = auth_hash.map { |k, v|
        [encode(k), encode(v)]
      }.sort.map { |k, v|
        "#{k}=#{v}"
      }.join("&")
      signature_base = method.upcase + "&" + encode(api_url) + "&" + encode(parameters)
      signing_key = encode(ENV["TWITTER_OAUTH_CONSUMER_SECRET"]) + "&"

      auth_hash[:oauth_signature] = sign(key: signing_key, data: signature_base)
    end

    "OAuth " + auth_hash.map { |k, v|
      "#{encode(k)}=\"#{encode(v)}\""
    }.join(", ")
  end

  def self.encode(s)
    URI.encode_www_form_component(s)
  end

  def self.sign(key:, data:)
    Base64.encode64(
      OpenSSL::HMAC.digest(OpenSSL::Digest.new("SHA1"), key, data)
    ).chomp
  end
end

If you squint at this method, the initial construction of auth_hash and its reduction into the comma-delimited string that is ultimately returned aren’t terrifically complex, though some of the component parameters are a little mysterious. For example, oauth_version and oauth_signature_method must be set to precisely “1.0” and “HMAC-SHA1” respectively, in every request, even though they’ve never changed in a decade. The random oauth_nonce is only there so Twitter can filter out duplicate requests, and oauth_timestamp so it can filter out old ones.
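For orientation, the value this method ultimately returns is just one long, comma-delimited header string. With obviously fake placeholder values (real ones are percent-encoded the same way), it ends up looking something like this:

```
OAuth oauth_callback_url="https%3A%2F%2Fexample.com%2Fcallback", oauth_consumer_key="YOUR_CONSUMER_KEY", oauth_nonce="RANDOM_NONCE", oauth_signature_method="HMAC-SHA1", oauth_timestamp="1609459200", oauth_version="1.0", oauth_signature="GENERATED_SIGNATURE"
```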

Now, where things really heat up is inside that tap block that mutates the hash by adding an oauth_signature to it.

A lot of people get confused or frustrated by the tap method, because all it does is yield the object it’s called on to a block and then return that same object. We could just as easily set a variable auth_hash = {}, then set auth_hash[:oauth_signature]. What makes tap useful is its communicative power. Whenever you see tap, the author is expressing that they are knowingly violating the functional purity of their code. Because tap doesn’t do anything with the result of the block it invokes, its only possible utility is in creating side effects—like mutation (as in this case), adding a debugger, appending to a log, etc. So whenever I use tap, it’s as if I’m shouting to future readers, “watch out: side effect!”
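Here’s a contrived little example of tap flagging a side effect (nothing Twitter-specific about it):

```ruby
# tap yields its receiver to the block and returns that same receiver,
# so the only reason to use it is the side effect inside the block.
config = {retries: 3}.tap do |h|
  h[:verbose] = true # side effect: mutating the hash we're about to return
end

config # => {:retries=>3, :verbose=>true}
```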

Because every aspect of this signed digest has to be constructed exactly as Twitter expects for your API requests to be processed, let’s take our time and explain line-by-line what’s happening in that tap block.

First, the auth_hash needs to have each key and value URL-encoded in pairs:

parameters = auth_hash.map { |k, v|
  [encode(k), encode(v)]
}

Next, because order matters when signing a string, all the parameters need to be lexicographically sorted by their encoded keys, hence this sort call immediately after encoding:

# …
}.sort

Next, we construct a query string, with each key joined to its value by = and the pairs joined together by &:

# …
.map { |k, v|
  "#{k}=#{v}"
}.join("&")
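Pulling those three steps together, here’s a toy run of the encode-sort-join pipeline with made-up parameters (not Twitter’s real ones):

```ruby
require "uri"

# Values get percent-encoded first, then the pairs are sorted by their
# encoded keys, then everything is joined into one query string.
params = {
  oauth_version: "1.0",
  oauth_nonce: "abc",
  oauth_callback: "https://example.com/cb"
}
query = params.map { |k, v|
  [URI.encode_www_form_component(k), URI.encode_www_form_component(v)]
}.sort.map { |k, v|
  "#{k}=#{v}"
}.join("&")

query # => "oauth_callback=https%3A%2F%2Fexample.com%2Fcb&oauth_nonce=abc&oauth_version=1.0"
```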

Now that we have our method, URL, and encoded parameter query string, we can join those together with & to create the text that we’ll be signing:

signature_base = method.upcase + "&" + encode(api_url) + "&" + encode(parameters)

And the key we’re using to sign that string is supposed to be a URL-encoded query string of our app’s consumer secret key and the user’s OAuth token secret, but since we don’t know their secret yet, we just have to sign with our key plus a dangling &, because that’s what the instruction manual tells us to do:

signing_key = encode(ENV["TWITTER_OAUTH_CONSUMER_SECRET"]) + "&"

We finally have what we need to construct the signature that Twitter wants to see! Because this method was confusing enough, I deferred that responsibility to a separate sign method:

auth_hash[:oauth_signature] = sign(
  key: signing_key,
  data: signature_base
)

This uses Ruby’s built-in OpenSSL and Base64 modules to first create a signed HMAC digest, and then to encode it into a Base64 string:

def self.sign(key:, data:)
  Base64.encode64(
    OpenSSL::HMAC.digest(
      OpenSSL::Digest.new("SHA1"),
      key,
      data
    )
  ).chomp
end
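Since sign is a pure function over Ruby’s standard OpenSSL and Base64 modules, it’s easy to sanity-check in isolation against a well-known HMAC-SHA1 test vector (nothing here is Twitter-specific):

```ruby
require "openssl"
require "base64"

def sign(key:, data:)
  Base64.encode64(
    OpenSSL::HMAC.digest(OpenSSL::Digest.new("SHA1"), key, data)
  ).chomp
end

# The classic HMAC-SHA1 test vector: key "key", quick-brown-fox message
sign(key: "key", data: "The quick brown fox jumps over the lazy dog")
# => "3nybhbi3iqa8ino29wqQcBydtNk="
```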

And only after you pull all that off, Twitter rewards you with a couple of randomly generated strings for your trouble.

Recall that all the action in Step 2 was between our server and Twitter’s API. The user is still out in the cold waiting for something to happen after clicking the link that fired off their initial POST request.

After we receive the OAuth token & secret from Twitter’s API, we have what we need to finish handling the response. So back to our controller action:

class TwitterAuthorizationsController < ApplicationController
  def create
    request_token = TwitterOauth.request_token
    session[:oauth_token] = request_token.oauth_token
    session[:oauth_token_secret] = request_token.oauth_token_secret

    redirect_to request_token.login_url
  end
end

Because Rails defaults to storing user session data in browser cookies, it might seem risky to be sending an OAuth token “secret” across the wire. Fear not, the entire session is encrypted and inaccessible to users themselves. Additionally, this token & secret are only good for this login flow, and will be replaced by a different access token & secret in Step 5.

After receiving the OAuth request token from Twitter, all we need to do to finish handling this request is to:

  1. Stash the token and secret in the user’s session

  2. Redirect the user to Twitter’s special OAuth login page, with the OAuth request token appended as a query parameter

Our work here is done for now! We’ve passed the hot potato back to Twitter.

Getting to this point has been a bit of a handful, so you’ll be pleased to know that your application bears no responsibility while the user actually signs in. Twitter will use the request token you passed to render a form to first authenticate the user (if they’re not signed-in already) and then to authorize your app’s access to the user’s account (unless they’ve granted it already).

If the user fails to authenticate successfully or denies authorization for your app, you’ll never hear from them again. If the user is already logged in and has previously authorized your app, they’ll fly through the turnstile and be immediately redirected back to you.

Upon success, Twitter will redirect the user to the callback URL you specified in the authorization header of your request in Step 2. (For security’s sake, Twitter will only redirect users to callback URLs that have been enumerated in your app’s settings in their developer portal.)

In Step 2, your server asked Twitter for a temporary “Request” token with which to begin a coordinated sign-in attempt. At this point, the user has successfully signed in, authorized your app, and Twitter has redirected them back to your designated callback URL with a new query parameter named oauth_verifier. In this step, we’ll take all these ingredients and ask Twitter for a longer-lasting “Access” token with which to make API requests on the user’s behalf (e.g. composing new tweets). The request is similar enough that we can take advantage of the authorization header behavior we implemented in Step 2, expanding it only slightly.

But first, we need to make our app actually respond to our designated callback URL (/twitter_authorizations/callback in this case). That starts with config/routes.rb:

Rails.application.routes.draw do
  get "/twitter_authorizations/callback", to: "twitter_authorizations#callback"
  resources :twitter_authorizations

  # …
end

Remember, order matters when you’re specifying routes in Rails! This one-off GET action must come before the call to resources, or else the default show action will be handed the request for twitter_authorizations/callback (how can it know that callback isn’t a valid ID?). As a result, we have to place any exceptional routes above the general-purpose ones that match on wildcard path components.

Next, we’ll pop back over into TwitterAuthorizationsController and add the callback action as a new instance method:

class TwitterAuthorizationsController < ApplicationController
  def callback
    if session[:oauth_token] != params[:oauth_token]
      raise "OAuth Request Token mismatch"
    end

    access_token = TwitterOauth.access_token(
      oauth_token: session[:oauth_token],
      oauth_token_secret: session[:oauth_token_secret],
      oauth_verifier: params[:oauth_verifier]
    )
    # …
  end
end

And this time, we’ve just invoked a second module method named access_token, this one taking several arguments: the request token & secret, as well as the new oauth_verifier parameter that Twitter has provided to ensure we’re keeping the login requests straight.

Now we just need to make that method real in our OAuth module (error handling removed below for clarity):

module TwitterOauth
  API_HOST = "api.twitter.com"
  AccessToken = Struct.new(
    :oauth_token, :oauth_token_secret, :user_id, :screen_name,
    keyword_init: true
  )

  def self.access_token(
    oauth_token:,
    oauth_token_secret:,
    oauth_verifier:
  )
    Net::HTTP.start(API_HOST, 443, use_ssl: true) do |http|
      uri = URI("https://#{API_HOST}/oauth/access_token")
      req = Net::HTTP::Post.new(uri)
      params = {
        oauth_verifier: oauth_verifier
      }
      req["Authorization"] = authorization_header(
        method: "POST",
        api_url: uri.to_s,
        oauth_token: oauth_token,
        oauth_token_secret: oauth_token_secret,
        params: params
      )
      req.set_form_data(params)

      res = http.request(req)
      body = Rack::Utils.parse_nested_query(res.body)
      AccessToken.new(
        oauth_token: body["oauth_token"],
        oauth_token_secret: body["oauth_token_secret"],
        user_id: body["user_id"],
        screen_name: body["screen_name"]
      )
    end
  end
end

If you were to place the request_token and access_token methods side-by-side, the first major difference that would stand out is that the latter requires some form data be sent in the body of the request (the only parameter being this new oauth_verifier). Additionally, the authorization_header method has a few extra parameters, because the header must now include the OAuth request token, the signing key must include the OAuth request token secret, and the form data parameters must be included in the signed digest.

Ultimately, the method returns a second struct, AccessToken, which includes the account’s unique Twitter ID and their “screen name”, which is apparently what Twitter internally calls what are more popularly known as handles (e.g. “searls”).

The additions to our authorization_header method are mercifully minor (although taken all at once, it’s now unwieldy to the point of being overwhelming):

def self.authorization_header(
  api_url:,
  method: "POST",
  auth_params: {},
  params: {},
  oauth_token: nil,
  oauth_token_secret: nil
)
  auth_hash = auth_params.merge({
    oauth_consumer_key: ENV["TWITTER_OAUTH_CONSUMER_KEY"],
    oauth_nonce: SecureRandom.base64(32),
    oauth_signature_method: "HMAC-SHA1",
    oauth_timestamp: Time.new.to_i,
    oauth_version: "1.0",
    oauth_token: oauth_token
  }).compact.tap do |auth_hash|
    parameters = params.merge(auth_hash).map { |k, v|
      [encode(k), encode(v)]
    }.sort.map { |k, v|
      "#{k}=#{v}"
    }.join("&")
    signature_base = method.upcase + "&" + encode(api_url) + "&" + encode(parameters)
    signing_key = encode(ENV["TWITTER_OAUTH_CONSUMER_SECRET"]) + "&" + encode(oauth_token_secret)

    auth_hash[:oauth_signature] = sign(key: signing_key, data: signature_base)
  end

  "OAuth " + auth_hash.map { |k, v|
    "#{encode(k)}=\"#{encode(v)}\""
  }.join(", ")
end

If you look closely, there are only four changes: a few additional parameters, the inclusion of the user’s oauth_token in the header, the merging of the form data params with the auth_hash, and the inclusion of the user’s oauth_token_secret in the key used to sign the HMAC digest.

Once Twitter responds to this request with an access token & secret for the user, Twitter’s involvement in this procedure is complete and it’s ready to start receiving normal API requests from your app on the user’s behalf.

Our final task is to persist the user’s access token. Because we’ve been using Rails’ conventional resources scheme to determine the names of our routes, controller, and views, the ActiveRecord model we should create for this already has a name determined by Rails conventions, even though we’ve yet to type it: TwitterAuthorization.

I specified the database table that would back that model with this migration:

class CreateUsersAndTwitterAuthorizations < ActiveRecord::Migration[6.1]
  def change
    # …
    create_table :twitter_authorizations do |t|
      t.references :user, foreign_key: {on_delete: :cascade}
      t.string :oauth_token, null: false
      t.string :oauth_token_secret, null: false
      t.string :twitter_user_id, null: false
      t.string :handle, null: false
      t.timestamps

      t.index [:user_id, :twitter_user_id], unique: true
    end
  end
end

Basically it’s just a bunch of strings, a unique index to ensure we don’t save multiple authorizations for the same user to the same Twitter account, and a foreign key constraint to delete any authorizations in the event that we delete a user.

Like most of my favorite ActiveRecord models, the class itself is quite small, and reads mostly like configuration:

class TwitterAuthorization < ApplicationRecord
  belongs_to :user
  has_many :toots
end
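
One caveat worth hedging on: the controller’s failure branch below reads errors.full_messages, but database-level constraints like our NOT NULL columns and unique index don’t populate a model’s errors; they raise exceptions at save time. If you want save to quietly return false with friendly messages instead, you might mirror those constraints with validations along these lines (my suggestion, not code from the original app):

```ruby
class TwitterAuthorization < ApplicationRecord
  belongs_to :user
  has_many :toots

  # Mirror the NOT NULL columns so blank attributes fail validation
  validates :oauth_token, :oauth_token_secret, :twitter_user_id, :handle,
    presence: true

  # Mirror the composite unique index so duplicates fail validation instead
  # of raising ActiveRecord::RecordNotUnique at the database layer
  validates :twitter_user_id, uniqueness: {scope: :user_id}
end
```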

With that plumbing out of the way, we can finish our controller’s callback action (can you believe we’re still not done handling it?), by adding something like this:

class TwitterAuthorizationsController < ApplicationController
  def callback
    raise "OAuth Request Token mismatch" unless session[:oauth_token] == params[:oauth_token]

    access_token = TwitterOauth.access_token(
      oauth_token: session[:oauth_token],
      oauth_token_secret: session[:oauth_token_secret],
      oauth_verifier: params[:oauth_verifier]
    )
    twitter_authorization = TwitterAuthorization.find_or_initialize_by(
      user: @current_user,
      twitter_user_id: access_token.user_id
    ).tap do |twitter_authorization|
      twitter_authorization.assign_attributes(
        oauth_token: access_token.oauth_token,
        oauth_token_secret: access_token.oauth_token_secret,
        handle: access_token.screen_name
      )
    end

    if twitter_authorization.save
      flash[:info] = "Signed into Twitter as @#{twitter_authorization.handle}!"
    else
      flash[:error] = "Twitter sign-in failed! #{twitter_authorization.errors.full_messages.join(", ")}"
    end

    redirect_to twitter_authorizations_path
  end
end

The reason we call find_or_initialize_by and then mutate the result via a tap’d call to assign_attributes is so that we can upsert the authorization while respecting our unique constraint, without immediately creating it. This allows us to branch on the conventional boolean-returning save call to set a success or failure message in the flash and, finally, redirect the user to a list of their authorizations.

We did it!

Was this a lot of work? Yes. Could you have pulled in a couple of libraries to do it for you? Also yes. Are you right to be worried that you’ve just saddled yourself with a maintenance nightmare? No, actually!

Going back to our discussion on evaluating dependencies by asking when and why they might change out from under us, it’s valuable to ask the same thing of our own code: when and why will this Twitter OAuth code change over the life of this app?

When we write code for our applications that we know will experience a lot of feature churn, or that affects many parts of the system directly, or that many people will need to tinker with in the future, then we should start budgeting for the cost of incurring change. It’s in those times that I’ll throw any non-essential complexity overboard, will be extra precious about naming new abstractions, and will button everything up with immaculate tests. When change is likely, invest in the code’s changeability; when change is unlikely, don’t future-proof it.

But when we write code that specifies behavior that is not only unlikely to change, but (as in this case) is literally outside our power to change, I’m far less concerned with investing in the code’s flexibility. I benefit from experience in this case: I have written OAuth flows as far back as 2011—for apps still in use today—that haven’t needed to be touched since they were initially created. When you’re operating in a context where necessary change is extremely unlikely, unnecessary change is actually the greater source of risk. And because libraries can be changed and updated for any reason and in perpetuity, that very changeability makes external dependencies a liability in cases like this one.

I know this has been a journey, but thank you for joining me on it. It’s all too rare that we’re able to take our time and think through the underlying tensions at play when we go about solving a problem, but the insights we glean from the experience can continue to inform our practice as developers for the rest of our careers.

If you found this post useful or if you have any comments, please let me know on, incidentally, Twitter! And if you’d like more pairing sessions like this one (except on your own codebase, with another developer working alongside you in real time), consider getting in touch with us at Test Double—it’s what we’re in business to do!

Justin Searls
