Posts in "programming"

Legacy Software

After about 7 months working exclusively on a product team I’ve started delving back into a bit of agency work with clients. It’s a stark difference moving from a codebase with an up-to-date version of Rails, the latest TS/React best practices, etc. to just trying to get docker compose up to run on a Rails 5 project, but it’s also a lot of fun.

As frustrating as working in ancient codebases can be, and I get why a lot of programmers hate it, solving these kinds of problems, especially within the constraints of a tight budget, can be a lot of fun. Greenfield projects are basically writing code, and writing a lot of it; legacy projects help you flex your Docker muscles, read release notes, and calculate end-of-life scenarios for Ubuntu versions!

Git Worktrees

I’m sure git worktrees have their place, perhaps in large compiled projects, but in Ruby and TS I find them to be more of a footgun than a help. Not being able to run specs or the server always becomes a hindrance. I’m often tempted to git worktree add when reviewing a PR or doing a quick bugfix on one of my branches, but inevitably I’ll get to a point where I want to make sure it works, and more often than not at that point I’ll have forgotten I’m even in a worktree. Just last week I spent about 15 minutes trying to debug a TS error before realizing that I was in a worktree and, therefore, the paths were confusing the TS compiler.

I’d love to take full advantage of worktrees, but based on the above experiences I can’t see the benefit over something like gwip && gs some_other_branch, which is pretty snappy. (If you’re not familiar, gwip and gs are just aliases for git add -A; git rm $(git ls-files --deleted) 2> /dev/null; git commit --no-verify --no-gpg-sign --message "--wip-- [skip ci]" and git switch from the OhMyZsh git plugin.)

Ecto, iLike you

In Elixir, macros are… discouraged. From the docs:

Macros should only be used as a last resort. Remember that explicit is better than implicit. Clear code is better than concise code.

I get it; coming from Ruby, where every codebase seems to be playing a game of Schrödinger’s Macro, it’s refreshing to work in an ecosystem where the code in your editor is what it is. As such I’ve always tried to embrace minimalism in Elixir. Yet Elixir has macros, and there are some really good “last resorts” as mentioned above. I’ve encountered one such case a few times when working in Ecto; out of the box Ecto has at least 99% of anything I could ever want in a database wrapper, but over the years there have been an odd handful of cases where I’ve wanted to extend its functionality in one way or another. I’m going to provide a few examples of this in action.

Keep in mind that both of these examples may feel a bit contrived out of context, and in neither case is the macro dramatically reducing the lines of code. However, if placed in the Repo module these macros become convenient, reusable Ecto helpers that can be called throughout the codebase.

Combining iLikes

A few years back I was working on some search functionality for a client. Their products were all made to order for specific customers. Allowing customers to search their order history came with several different query options, including the name (first, last, email) of the person who placed the order, those same options for the salesperson who placed the order on their behalf, or various attributes of the product. This led to a whole chain of joins and ilikes:

# `val` is the user's search term, bound by the surrounding filter clause
query
|> join(:left, [order: o], u in assoc(o, :user), as: :user)
|> join(:left, [order: o], s in assoc(o, :salesperson), as: :sp)
|> join(:left, [user: u], uc in assoc(u, :user_credentials), as: :uc)
|> join(:left, [sp: sp], sc in assoc(sp, :user_credentials), as: :sc)
|> join(:left, [order: o], oli in assoc(o, :order_line_items), as: :oli)
|> join(:left, [oli: oli], prod in assoc(oli, :product_item), as: :prod)
|> join(:left, [prod: prod], config in assoc(prod, :configuration), as: :config)
|> join(:left, [config: config], pt in assoc(config, :product_type), as: :pt)
|> search_function(val)
|> group_by([order: o], o.id)
 
defp search_function(query, value) do
  str = "%#{value}%"

  query
  |> where(
    [order: o, uc: uc, sc: sc, pt: pt],
    ilike(o.po_number, ^str) or
      ilike(uc.email, ^str) or
      ilike(uc.firstname, ^str) or
      ilike(uc.lastname, ^str) or
      ilike(sc.email, ^str) or
      ilike(sc.firstname, ^str) or
      ilike(sc.lastname, ^str) or
      ilike(pt.name, ^str) or
      ilike(pt.design_id, ^str)
  )
end

It’s readable enough, especially the joins; I’d argue that Ecto’s elegant syntax actually makes this slightly more readable than a standard SQL statement. But search_function is a bit much, to the point where Credo started lighting up cyclomatic complexity warnings.

There was a better way. Maybe not for all cases; frankly, if I hadn’t been warned about the complexity I would have called it a day here. But I thought it would be fun to condense this, piping all the joins into a smaller search_function with fewer hand-written ilike lines. This is where one can make good use of macros and Ecto:


defp search_function(query, value) do
  str = "%#{value}%"

  query
  |> where(
    [order: o, uc: uc, sc: sc, pt: pt],
    multiple_ilike([:email, :firstname, :lastname], uc, str) or
      multiple_ilike([:email, :firstname, :lastname], sc, str) or
      multiple_ilike([:name, :design_id], pt, str) or
      ilike(o.po_number, ^str)
  )
end
  
  
defmacro multiple_ilike(keys, schema, value) do
  Macro.expand(ilike_irr(keys, schema, value), __CALLER__)
end

# Base case: with exactly two columns left, emit the final pair of ilikes.
defp ilike_irr([key, key2], schema, value) do
  quote do
    ilike(field(unquote(schema), unquote(key)), ^unquote(value)) or
      ilike(field(unquote(schema), unquote(key2)), ^unquote(value))
  end
end

# Recursive case: emit one ilike, then recurse on the remaining columns.
defp ilike_irr([key | keys], schema, value) do
  quote do
    ilike(field(unquote(schema), unquote(key)), ^unquote(value)) or
      unquote(ilike_irr(keys, schema, value))
  end
end

Working from the top, this takes the where clause from nine explicit ilike lines down to four, while still making just as many ilike calls. I would have used multiple_ilike/3 for orders as well if we were searching more than one column there.

It’s fairly standard recursion in Elixir, made only a little more frightening by the quoting and unquoting of macro code and passed-in runtime values.

To illustrate, let’s call it: multiple_ilike([:email, :firstname, :lastname], user_credentials, "%trav%"). The recursion in ilike_irr/3 needs at least two columns (although one could handle a single column for a safer API). Each step uses Ecto’s ilike/2; the macro takes your list of columns (keys), the table (schema), and the search string. We unquote these values because they are not part of the macro itself, i.e. we want them to be whatever is passed in. The first step adds ilike(field(user_credentials, :email), "%trav%") to the query. Fairly straightforward (if you aren’t familiar with Ecto, field/2 is a way of dynamically accessing a column, which lets the generated query work for whatever schema and keys are passed to the macro). This initial ilike/2 is joined with an or/2 (plain SQL OR) and the recursion continues: ilike(field(user_credentials, :firstname), "%trav%") makes up the right-hand side of the or. We continue in this fashion until only two keys are left, at which point the base case returns both ilike calls, giving a fully formed expression with multiple ilike ... or ilike ... clauses chained together.
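
To make that concrete, here is a hand-written sketch of roughly what the example call expands to at the call site (not literal compiler output; parentheses added to show the nesting):

# Roughly the code generated by
# multiple_ilike([:email, :firstname, :lastname], user_credentials, "%trav%")
ilike(field(user_credentials, :email), ^"%trav%") or
  (ilike(field(user_credentials, :firstname), ^"%trav%") or
     ilike(field(user_credentials, :lastname), ^"%trav%"))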

I love stuff like this; Ecto already feels like magic (not because it’s obfuscating anything, just because of how smooth things are) and this lets me add a few ingredients of my own to the potion.

Copy git hashes

I’ve been reaching more and more for git history commands to get details about the file I’m working on. I used to use tools like GitHub Desktop or Sublime Merge, but I never felt like they added that much value; it’s just faster to call up a git log somefile or git log -L 5,10:somefile. The only shortcoming of this approach is it generally leaves me wanting a commit hash in my clipboard (often to switch to, or to run git diff with). No more! Today I doubled down on grabbing these hashes without having to mouse over and select them; I give you: git log --pretty=oneline myfile | head -c7 | pbcopy. This is the simplest form of this that I can find.

--pretty=oneline ensures the commit hash comes first; piped into head -c7, we get the first 7 characters of the hash (you could grab more, or use some kind of regex to get the whole thing, but 7 characters is the default short-hash length and is generally enough for git to reliably find the commit). Pipe it to pbcopy and you’ve got a little git hash.
It’s a fair amount of typing; I could set --pretty=oneline in my git config, and frankly I could probably alias the whole thing as some kind of function in my .zshconfig, but for now it is what it is.

Weekly Round Up: June 13, 2025 👻

It was a week of state machines. Two separate Rails projects, two separate state machine libraries (state_machines and aasm), both sending emails. One is a fairly straightforward project for a department of education; it’s an old codebase, but Objective built it and has been working on it ever since, so it’s fairly clean and straightforward for its age. I think the more contractors and firms a codebase passes through, the more muddled it gets. I’ve been working on this codebase for about two years now, the entire time converting an old paper process to a digital one; it’s not an overly ambitious project, but the budgeting has necessitated a slower pace of development. With only a few months left in the yearly budget (in education I guess the fiscal year ends with the school year) I was asked to quickly implement a form that allows users to draft a custom email message and attach a PDF. It’s been a while since I’ve done this with Rails; my last experience was in the Paperclip days and that was not too fun. I’ve been pleasantly surprised with ActiveStorage, it’s much more plug-and-play than I recall (I’ve also been developing a lot longer now).

The other project is far more involved: my new full-time gig at Built. It’s been exciting to work in tandem with another developer who has been handling the front-end work. Coming from a small agency, I’ve always developed features full stack. Part of why I wanted to switch to a dedicated product team was to have experiences like this one, where a greater degree of planning and coordination between developers is required. I started by creating a model last week and writing as many tests as I thought would be relevant. I’ve been through TDD phases in the past, but I think in small teams and projects TDD offers diminishing returns. It makes a lot of sense in a scenario like this, even on a fairly small team, since I’m developing features that I won’t be able to test in the browser until the other developer has her features in place. She, in turn, won’t know if the front end works until my code is merged into her branch. This feature was the bulk of my week, but it came together in time for some Friday afternoon QA, from which I’m sure there will be several things to fix on Monday morning.

Multi-tenancy with Phoenix and Elixir

There are lots of good places to start with multi-tenancy in Elixir (although I’d recommend Ecto’s own docs for either foreign keys or Postgres schemas). Most of the write-ups and tutorials start the same way: “generate a new Phoenix application with mix phx.new”. While this is great if you’re starting an enterprise SaaS app from scratch, it leaves something to be desired if, like me, you are migrating an existing codebase with thousands of users and products to a multi-tenant application. I recently went through this with an enterprise client and there were enough pitfalls and interesting problems to solve that it seemed to warrant a detailed post.

I believe the solution I put together is both effective and elegant, but it is not without its pain points. Mainly, if you are going to use PostgreSQL schemas (which I did) you are going to have to migrate your existing data into those prefixes. There is no easy way around this; it’s just a slog you have to do. More on that later.

Schemas?

I went back and forth for a while before finally settling on query prefixes, as they felt a little more elegant: segmenting data without having to add new foreign key columns to any tables. It also makes it easy to migrate or wipe customer data if needed. Admittedly, if you’re managing tens of thousands of tenants in a single database this approach will be a bottleneck. In my case that was not a concern; there are two current tenants and the client only expects to add a few tenants every year, if that. As mentioned, Ecto has great docs on setting up schemas; however, I opted to use a dependency called Triplex, mostly for the sake of time (about a week in I realized I could have rewritten most of the required features in a day or two, but we had about a month to make this transition, so a refactor at that point seemed like overkill). Schemas work because we are using PostgreSQL; you can kind of hack together “schemas” with MySQL, but under the veil they’re just separate databases, and I can’t vouch for that approach because my Elixir projects are mostly on Postgres.
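
For reference, getting a tenant’s schema created with Triplex boils down to something like this (a sketch from memory of the Triplex README; the repo config key, tenant name, and migration location are my assumptions, so double-check the library docs):

# config/config.exs — point Triplex at your repo
config :triplex, repo: MyApp.Repo

# Creates the PostgreSQL schema for the tenant and runs the tenant
# migrations (kept separately, e.g. in priv/repo/tenant_migrations).
Triplex.create("salt_lake")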

The first big hurdle is ensuring that your queries are run against the right schema. By default Ecto is going to run queries in the public schema. On any given query you can change this by passing a prefix: option, i.e. Repo.one(query, prefix: "some_prefix"). Now, rewriting hundreds or thousands of Repo calls with a variable prefix is not exactly convenient, but it’s imperative to ensure queries are scoped to the correct schema. Just imagine the catastrophic breach if Customer A got back Customer B’s data!

Thankfully you do not have to rewrite all your queries to explicitly pass a prefix. Ecto.Repo ships with some handy callbacks (enter Repo hooks!) that let you effectively write Repo.one(query, prefix: "some_prefix") without actually writing it for every single query. You can implement prepare_query/3 to filter queries and modify their options, including the prefix. You add these hooks to YourApp.Repo. This is prepare_query/3 in its simplest form:

@impl true 
def prepare_query(_operation, query, opts) do 
	opts = Keyword.put(opts, :prefix, "some_prefix")
	{query, opts}
end

Now all queries will run against the some_prefix schema rather than the public schema. In our app we had a few tables that we wanted to keep scoped to public; for example you may have an admins table, or possibly oban_jobs, tenants, etc. You can handle this in a few ways:

@impl true 
def prepare_query(_operation, query, opts) do 
	if opts[:skip_prefix] do 
		{query, opts}
	else 
		opts = Keyword.put(opts, :prefix, "some_prefix")
		{query, opts}
	end 
end

This works, although it necessitates passing skip_prefix: true to every Repo call that should stay on the public schema; likely fewer calls than before, but still kind of defeating the purpose of prepare_query/3.

@sources ~w[admins oban_jobs oban_peers customer_pricing]

@impl true 
def prepare_query(_operation, %Ecto.Query{from: %{source: {source, _}}} = query, opts) when source in @sources do 
	{query, opts}
end 

def prepare_query(_operation, query, opts) do 
... 
end

By pattern matching on your allowed tables you can bypass the prefix override. I used a combination of both of the above approaches: a list of allowed source tables as well as the skip_prefix option, which adds a manual override to the API. In theory you shouldn’t need it, but you never know; tests, edge cases, shrugs…

Tenant Selection

At this point we’ve pointed every query in the application at a tenant prefix in about 10 lines of code. Not bad, but it’s also not actually dynamic yet; I’ve hard-coded some_prefix into my queries. Before we make the hook dynamic we need to determine how Phoenix is going to recognize the tenant. There are many ways of doing this; in my case, for now, we are using subdomains.

Since the subdomain is available on conn.host, I set up a plug to fetch it:

defmodule MyApp.TenantPlug do
  import Plug.Conn
  ...

  def select_organization_from_domain(conn, _opts) do
    subdomain = get_subdomain(conn)
    put_session(conn, :tenant, subdomain)
  end

  defp get_subdomain(%{host: host}) do
    [subdomain | _] = String.split(host, ".")
    subdomain
  end
end

This gets the subdomain and puts it in the session (which is not strictly necessary, but is nice to have). Next, let’s pass it to Repo. As with the queries, you need not rewrite every Repo call to pass in a :subdomain option; here again Elixir/Phoenix has your back. In Phoenix each request is handled in its own process, and that process can stash data for itself. Back in Repo I added these little helpers:

@tenant_key {__MODULE__, :tenant}

def put_tenant_subdomain(subdomain) do
  Process.put(@tenant_key, subdomain)
end

def get_tenant_subdomain do
  Process.get(@tenant_key)
end

Now back in the TenantPlug we can add the subdomain to the process:

def select_organization_from_domain(conn, _opts) do
  subdomain = get_subdomain(conn)
  Repo.put_tenant_subdomain(subdomain)
  put_session(conn, :tenant, subdomain)
end

A second Repo behaviour can be used to pass options to the Repo call: default_options/1. Rather than explicitly writing opts = Keyword.put(opts, :prefix, "some_prefix") in the prepare_query/3 hook, default_options/1 sets up your opts before the Repo function runs. From there we call get_tenant_subdomain/0 to retrieve the subdomain/query prefix we set in the plug:

@impl true
def default_options(_operation) do
  [prefix: get_tenant_subdomain()]
end

# same key and helper defined above
@tenant_key {__MODULE__, :tenant}
def get_tenant_subdomain, do: Process.get(@tenant_key)

Like prepare_query/3, default_options/1 will run with every query.

With this implemented, navigating to a specific subdomain will set the tenant in the current process (as well as in the session), and any database queries in that request will be scoped to the tenant’s schema. Putting it all together, we have something like this in repo.ex:


@allowed_sources ~w[oban_jobs tenants]

@impl true
def default_options(_operation) do
  [prefix: get_tenant_subdomain()]
end

@impl true
def prepare_query(_operation, %Ecto.Query{from: %{source: {source, _}}} = query, opts)
    when source in @allowed_sources do
  opts = Keyword.put(opts, :prefix, "public")
  {query, opts}
end

def prepare_query(_operation, query, opts) do
  if opts[:skip_prefix] do
    # manual escape hatch back to the public schema
    {query, Keyword.put(opts, :prefix, "public")}
  else
    # the tenant prefix has already been set by default_options/1
    {query, opts}
  end
end

@tenant_key {__MODULE__, :tenant}

def put_tenant_subdomain(subdomain) do
  Process.put(@tenant_key, subdomain)
end

def get_tenant_subdomain do
  Process.get(@tenant_key)
end

The simplified version of my tenant_selection_plug.ex looks like:

defmodule MyApp.TenantPlug do
  import Plug.Conn
  alias MyApp.Repo

  def select_organization_from_domain(conn, _opts) do
    subdomain = get_subdomain(conn)
    Repo.put_tenant_subdomain(subdomain)
    put_session(conn, :tenant, subdomain)
  end

  defp get_subdomain(%{host: host}) do
    [subdomain | _] = String.split(host, ".")
    subdomain
  end
end
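
To actually run the plug, you would wire it into the router, roughly like this (a sketch of one way to do it; the pipeline name and the import are assumptions about how your router is organized):

# router.ex
import MyApp.TenantPlug

pipeline :browser do
  # ...the usual browser plugs...
  plug :select_organization_from_domain
end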

In production we are handling a lot more, such as authorization with Guardian, but this shows how simple it is to get a subdomain and add it to the session. The above is a fairly bare-bones approach; our final project had a lot more customization and ended up being organized a bit differently. For example, we extracted the functions dealing with getting and setting the @tenant_key in the process into their own module. My hope is that the above lays the groundwork for anyone looking to do something similar.

Data Migration

I wish I had a data-migration solution half as slick as the querying story Ecto’s behaviours enable. I was unable to find an elegant way to migrate the relevant data into specific schemas, so I was forced to do it with good old SQL.

-- copy locations
INSERT INTO salt_lake.locations SELECT * FROM public.locations WHERE id = 'salt_lake_location_id';

-- copy customers 
INSERT INTO salt_lake.customers SELECT * FROM public.customers WHERE location_id = 'salt_lake_location_id';

I had about 50 queries similar to this. Fortunately, tenants were mapped to locations, and at the time of the migration the client only had two tenants (the system was migrating from a product business to a consulting business), so I ran these queries twice, replacing salt_lake with bakersfield on the second pass. In my case, due to the way the system was originally designed to work with an external system (looking at you, QuickBooks) and some changes the customer was making to how that system would be used, this migration ended up being a bit hairier than expected. I had to write several ad-hoc queries that looked less like the above and more like:

INSERT INTO salt_lake.qb_orders
SELECT qb.*
FROM qb_orders qb
JOIN orders o ON o.qb_order_id = qb.id
JOIN customers c ON o.customer_id = c.id
WHERE NOT EXISTS (SELECT 1 FROM salt_lake.qb_orders slcqb WHERE slcqb.id = qb.id)
  AND c.name ILIKE '%A Problematic Customer%';

Again, that’s not the fault of the multi-tenancy setup; migrating data in any complex system is always going to have its prickly bits. If anyone has ideas for a more elegant migration pattern (for the first two queries; ignore the last one, that’s an unfortunate specific), I’m all ears; shoot me an email at self[at]travisfantina.com.

Today I Learned ~D[2025-05-14]

I recently switched jobs, which means new BitBucket credentials. However, I remain an occasional consultant with my last agency, so I need to keep my public key associated with their BitBucket account…

The first thing I learned today

BitBucket won’t let you use the same public key for multiple accounts. I find this a little odd, like how AWS won’t let you name an S3 bucket if the name already exists. It feels like a website telling you “hey, somebody is using this password, let’s try something else!” I know RSA key pairs are more secure and unique than passwords, but still 🤷

Making multiple pushes to git easy

You can adjust your ~/.ssh/config to easily push to separate git accounts with different keys:

# Assume this is your default
Host *
    UseKeychain yes 

# Key 2
Host altkey
    HostName bitbucket.org
    IdentityFile ~/.ssh/alt-key
    # you likely don't need this but it's nice to specify 
    User git 

Then add/update your remote origin:

git remote add origin git@altkey:bitbucket_account/repo.git

Instead of bitbucket.org:account you’re just subbing in the Host alias. From there SSH doesn’t care, because it’s been pointed at an IdentityFile; it may not be the system default, but it works.

The git problems begin

git push and:

fatal: Could not read from remote repository.

Please make sure you have the correct access rights

OK, fairly common; let’s go through the checklist:

  1. The key is in BitBucket
  2. BitBucket access is “write”
  3. Check origin (see above)
  4. Check permissions on the public key

And that’s about where my expertise ended.

Diving in

It’s useful to learn a bit of debugging here. You can get pretty verbose git logging by prefixing your git command with the environment variable GIT_SSH_COMMAND="ssh -vvv". Pretty cool, and I was able to confirm a few differences between pushes to a working repo and the broken one. I was also able to give this log to an LLM and bounce a few ideas off it, but ultimately I don’t feel like these logs gave me a lot of valuable info. git config --list is likewise handy, but it didn’t show me any glaring issues. So I started looking into the SSH setup: ssh-add -l lists the keys your SSH agent has loaded. To be sure, I ran ssh-add -D, which removes all of your keys from the agent, and then explicitly added both keys back with ssh-add ~/.ssh/[key name]. Then I ran ssh -T git@altkey, which runs a test with the alias configured in the config file. Infuriatingly, this returned:

authenticated via ssh key.

You can use git to connect to Bitbucket. Shell access is disabled

So my config was correct, I had access, but I could not push. It took me an hour but eventually I set the key for git to use explicitly:

GIT_SSH_COMMAND="ssh -i ~/.ssh/alt-key -o IdentitiesOnly=yes" git clone git@altkey:bitbucket_account/repo.git

No further issues (with either repo).
It’s unlikely I’ll remember setting GIT_SSH_COMMAND explicitly, which is the main reason I’m writing this!

Class Configs with Lambdas in Ruby

I’ve been getting reacquainted with Ruby, diving into a well-established project which has been blessed by numerous smart developers over the past 10 years. I discovered an interesting pattern for gathering models (ApplicationRecord classes) that may or may not be eligible for some feature. You start with a mixin that creates a method for your classes to pass options to, as well as a method for determining whether those options enable the feature or not:

module ProvidesFeature 
    extend ActiveSupport::Concern

    class_methods do 
        # called in the model class to register a feature configuration
        def features_provided(model, **opts)
            (@features ||= []) << [model, opts]
        end

        # call this to initialize class feature checks
        def feature_models(ctxt)
            (@features || []).map do |args|
                DynamicFeature.new(ctxt, args)
            end
        end
    end 
end 

Here is an example of the DynamicFeature class instantiated above. This could be a bit simpler if you didn’t want to pass any context in, but a lot of the power of this approach comes from the flexibility an argument like context gives you:

class DynamicFeature 
    def initialize(ctxt, config_args)
        @ctxt = ctxt
        configure(config_args)  
    end

    def configure(config_args)
        # config_args is the [model, opts] pair registered in the mixin
        _model, opts = config_args
        @should_provide_feature = opts.fetch(:should_feature_be_provided) do 
            ->(ctxt) { ctxt&.fetch(:person_is_admin, false) }
        end
    end 

    def can_feature?
        @should_provide_feature.call(@ctxt)
    end
end 

Pausing for a moment to break this down: the #configure method is the main source of the magic. First we try to fetch the :should_feature_be_provided option (set by the model below). If it’s there we use its value; however, there is built-in flexibility here: if the opts don’t include :should_feature_be_provided, fetch falls back to a default lambda that checks the context. Again, you don’t need to pass anything else, but I view this flexibility as a strength if used strategically. Now implement it in an ActiveRecord model, e.g. Person:

class Person < ApplicationRecord 
    include ProvidesFeature 

    features_provided :person, 
        should_feature_be_provided: ->(ctxt) { ctxt.person.is_admin? }
end

You can then easily gather any models that ProvidesFeature:

ApplicationRecord.subclasses.select { |klass| klass < ProvidesFeature }

Instantiate DynamicFeature on each class (note we are passing some context that assumes there is a person with an is_admin? method; it’s a little contrived, but it illustrates the point: you can pass additional context in when the feature_models are built):

.flat_map { |klass| klass.feature_models(ctxt) }

Then filter with can_feature?

.select { |feature| feature.can_feature? }

At the start of this post I said this was an “interesting pattern”, not necessarily a good one. I’m still fairly new to Ruby (despite having built a few production projects back in 2016 and 2018) and to the OO paradigm. Personally, I found the above extremely difficult to grok, and even though I understand it now, within the context of the project I’m working on I find myself treadmilling through various files. In some ways I feel like, clever as it is, this pattern may obfuscate a little too much, but I’m open to feedback from those who have been in the OO world longer.

Weekly Roundup: May 2, 2025

This week I formally transitioned from my full-time consulting gig at Objective to a full-time gig at Built For Teams; more details on that in a future post. However, broadly speaking it means that I’m dusting off my Ruby skills and diving deeper into the realm of OO programming than I ever have before.

Farewell ASDF

Last Friday night I pulled a Flutter repo I’m working on with a friend and started having all kinds of issues trying to install CocoaPods. I ran gem install cocoapods, but then flutter run produced this error:

Warning: CocoaPods is installed but broken. Skipping pod install.
...
Error: CocoaPods not installed or not in valid state.

OK. So I did some more research, threw in a sudo, no luck. pod version produced this error:

<internal:/Users/travis/.asdf/installs/ruby/3.3.5/lib/ruby/3.3.0/rubygems/core_ext/kernel_require.rb>:136:in `require': linked to incompatible /Users/travis/.asdf/installs/ruby/3.1.6/lib/libruby.3.1.dylib -

Ah! I’ve seen this more than once! Ever since I shifted to a Ruby-focused team at the start of the year, Ruby version management has felt like an uphill slog. I’ve reshimmed multiple times, removed versions of Ruby, removed the Ruby plugin, and reinstalled ASDF. Things work for a time but eventually I run into errors like the above. My hunch, which may be obvious, is that something was wrong with my setup that was placing versions of Ruby inside other versions (ruby/3.3.5/lib/ruby/3.3.0); I’m not sure if the path is supposed to look like that, but it doesn’t make sense to me. I’m willing to take responsibility here; it may be that my $PATH was misconfigured (although I attempted multiple times to provide a concise path for ASDF) or that something in my system was messing with ASDF. I love ASDF; it’s served me very well for years. Being able to remove rvm and nvm and seamlessly manage Elixir versions between projects was a breath of fresh air. The docs are clear and concise, and the tool provides enough functionality to get stuff done without getting in the way. However, for whatever reason, the slog to get Ruby working just took its toll. One of my coworkers mentioned Mise, which is a drop-in replacement for ASDF. I installed it in about 30 seconds and in 45 seconds my project was running with Mise. 👏

Weekly Roundup: Apr 25, 2025

At the agency, we have a client who has asked that customers accept terms of service before checking out. This is for an Elixir project; mostly full-stack Elixir, though the frontend has an odd assortment of sprinkles: StimulusJS and React. I created a terms_and_conditions versions table and an accompanying view helper which checks terms_version_accepted on the user record: if the latest terms_and_conditions.inserted_at matches terms_version_accepted, the user is shown an active “proceed to checkout” button; if not, the button is disabled and a note asking them to accept the terms of service is displayed.
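
As a rough sketch of that check (the names here are assumptions, not the real code; Terms.latest/0 is a hypothetical function returning the newest terms_and_conditions record):

# Hypothetical view helper: has the user accepted the latest terms?
def accepted_latest_terms?(user) do
  case Terms.latest() do
    nil -> true
    %{inserted_at: inserted_at} -> user.terms_version_accepted == inserted_at
  end
end
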
Since most of the Elixir projects I work on are full-stack (Phoenix LiveView) I don’t often get to write API endpoints. The API work on this was admittedly very small: a simple endpoint that takes the user’s ID and updates the terms_version_accepted timestamp when they click “accept” in the modal. It returns a URL which we then append to the checkout link, allowing the user to proceed. This feature is due May 5th, but I’m hoping to get it onto the staging server on Monday or Tuesday.
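
For illustration, the endpoint boils down to something like this (a sketch; the controller shape, Accounts.accept_terms/1, and the returned URL are my assumptions rather than the client’s actual code):

# Hypothetical controller action; Accounts.accept_terms/1 is assumed to set
# terms_version_accepted to the latest terms' timestamp for the given user.
def accept_terms(conn, %{"user_id" => user_id}) do
  case Accounts.accept_terms(user_id) do
    {:ok, _user} ->
      json(conn, %{checkout_url: "/checkout"})

    {:error, _reason} ->
      conn
      |> put_status(:unprocessable_entity)
      |> json(%{error: "could not accept terms"})
  end
end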

Internal Tooling:

I’ve been using fzf for a while, but I’ve wanted to filter only unstaged files; ideally, whenever I type git add I just want to see a list of unstaged files that I can add. Admittedly I got some help from AI to write this up:

function git_add_unstaged() {
    local files
    files=$(git diff --name-only --diff-filter=ACMR | fzf --multi --preview 'git diff --color=always -- {}')
    if [[ -n "$files" ]]; then
        BUFFER="git add $files"
        CURSOR=$#BUFFER
    fi
}

function git_add_unstaged_widget() {
    if [[ $BUFFER == 'git add' ]] && [[ $CURSOR -eq $#BUFFER ]]; then 
        git_add_unstaged 
        zle redisplay
    else 
        zle self-insert
    fi
}

zle -N git_add_unstaged_widget 
bindkey ' ' git_add_unstaged_widget

I’m wondering if I’ll find the automatic git add to be jarring, or if I’ll hit situations, such as a merge conflict, where this may not work. If so I can always fiddle with the bindkey, but for right now I’m enjoying my newfound git add speeds.