Posts in "programming"

Weekly Roundup: May 2, 2025

This week I formally transitioned from my full-time consulting gig at Objective to a full-time gig at Built For Teams; more details on that in a future post. Broadly speaking it means that I’m dusting off my Ruby skills and diving deeper into the realm of OO programming than I ever have before.

Farewell ASDF

Last Friday night I pulled a Flutter repo I’m working on with a friend and started having all kinds of issues trying to install CocoaPods. I ran gem install cocoapods, but then flutter run produced this error:

Warning: CocoaPods is installed but broken. Skipping pod install.
...
Error: CocoaPods not installed or not in valid state.

Ok. So I did some more research, threw in a sudo, no luck. pod version produced this error:

<internal:/Users/travis/.asdf/installs/ruby/3.3.5/lib/ruby/3.3.0/rubygems/core_ext/kernel_require.rb>:136:in `require': linked to incompatible /Users/travis/.asdf/installs/ruby/3.1.6/lib/libruby.3.1.dylib -

Ah! I’ve seen this more than once! Ever since I shifted to a Ruby-focused team at the start of the year I feel like Ruby version management has been an uphill slog. I’ve reshim’d multiple times, removed versions of Ruby, removed the Ruby plugin, and reinstalled ASDF. Things work for a time but eventually I run into errors like the above. My hunch, which may be obvious, is that something was wrong with my setup that was placing versions of Ruby inside other versions (ruby/3.3.5/lib/ruby/3.3.0); I’m not sure if the path is supposed to look like that but it doesn’t make sense to me. I’m willing to take responsibility here; it may be that my $PATH was misconfigured (although I attempted multiple times to provide a concise path for ASDF) or that something in my system was messing with ASDF. I love ASDF, it’s served me very well for years. Being able to remove rvm and nvm and seamlessly manage Elixir versions between projects was a breath of fresh air. The docs are clear and concise, and the tool provides enough functionality to get stuff done without getting in the way. However, for whatever reason, the slog to get Ruby working just took its toll. One of my coworkers mentioned Mise, which is a drop-in replacement for ASDF. I installed it in about 30 seconds and in 45 seconds my project was running with Mise. πŸ‘

Weekly Roundup: Apr 25, 2025

At the agency, we have a client who has asked that customers accept terms of service before checking out. This is for an Elixir project; mostly fullstack Elixir, though the frontend has an odd assortment of sprinkles: StimulusJS and React. I created a terms_and_conditions versions table and an accompanying view helper which checks terms_version_accepted on the user record. If the latest terms_and_conditions.inserted_at date matches terms_version_accepted, the user is shown an active “proceed to checkout” button; if not, the button is disabled and a note asking them to accept the terms of service is displayed.
Since most of the Elixir projects I work on are fullstack (Phoenix LiveView) I don’t often get to write API endpoints. The API work on this was admittedly very small: a simple endpoint that takes the user’s ID and updates the terms_version_accepted timestamp when they click “accept” in the modal. It returns a URL which we then append to the checkout link, allowing the user to proceed. This feature is due May 5th but I’m hoping to get it onto the staging server on Monday or Tuesday.
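
For the curious, here’s a rough sketch of what that view-helper check could look like. The module, schema, and field names (MyApp.Terms.TermsVersion, terms_version_accepted) are assumptions for illustration, not the client’s actual code:

defmodule MyAppWeb.TermsHelpers do
  import Ecto.Query

  alias MyApp.Repo
  # Hypothetical schema backing the terms_and_conditions versions table.
  alias MyApp.Terms.TermsVersion

  # True when the user's terms_version_accepted matches the inserted_at of the
  # most recent terms row; the template uses this to enable or disable the
  # "proceed to checkout" button.
  def accepted_latest_terms?(user) do
    latest_inserted_at =
      Repo.one(from t in TermsVersion, select: max(t.inserted_at))

    user.terms_version_accepted == latest_inserted_at
  end
end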

Internal Tooling:

I’ve been using fzf for a while, but I’ve wanted to filter down to only unstaged files; ideally, whenever I type git add I just want to see a list of unstaged files that I can add. Admittedly I got some help from AI to write this up:

# Pick unstaged files with fzf and drop them onto the command line after `git add`.
function git_add_unstaged() {
    local files
    # List unstaged (added/copied/modified/renamed) files, pick with fzf, and
    # join fzf's newline-separated selections into one space-separated list.
    files=$(git diff --name-only --diff-filter=ACMR | fzf --multi --preview 'git diff --color=always -- {}' | tr '\n' ' ')
    if [[ -n "$files" ]]; then
        BUFFER="git add $files"
        CURSOR=$#BUFFER
    fi
}

# ZLE widget: when the buffer is exactly `git add` and space is pressed,
# run the picker; otherwise insert the space as normal.
function git_add_unstaged_widget() {
    if [[ $BUFFER == 'git add' ]] && [[ $CURSOR -eq $#BUFFER ]]; then
        git_add_unstaged
        zle redisplay
    else
        zle self-insert
    fi
}

zle -N git_add_unstaged_widget
bindkey ' ' git_add_unstaged_widget

I’m wondering if I’ll find the automatic git add to be jarring, or hit situations such as a merge conflict where this may not work. If so I can always fiddle with the bindkey, but for right now I’m enjoying my newfound git add speeds.

Weekly Roundup: Apr 18, 2025

Working for a small agency I am fortunate to work on a number of fast-moving projects simultaneously. For years I’ve failed to document what I do during the week, so I’m starting a little recap of my week. One part historical record, one part general interest. I’m posting it on my blog on the off chance that somebody reads it and, facing a similar problem, will reach out; I’m always happy to discuss what worked for me and what didn’t. It also doesn’t hurt to put this stuff into the world to show that yes, I actually do work; I haven’t always had the most active GitHub, but most of my client projects are private/proprietary. I’m easing into this: all week I was looking forward to this post; now, however, I realize I should have been working on it throughout the week, not cramming it in from memory on a Friday night.

This week was a balance between my ongoing Elixir projects and a newer (to me) Ruby project.

  • For the past five years I’ve either supported, or been the lead dev on, a large B2B ecommerce platform which handles a few million in daily sales. Over the winter the company began consolidating their North American and European processes, which includes using said platform for sales in the EU. Although the hope is that the European process will align with the North American one, there are some relevant differences. For example, in North America the client’s product is technically considered a “raw material”, which means there is no Value Added Tax (VAT); in Europe, however, depending on the country of origin and the destination, VAT may be charged. Other relevant changes are shipping across borders, truck loading calculations, and different invoicing procedures. At this point we are still in the research and discovery phase, but I’ve been working with another developer to scope this project out and write some preliminary tests as research.
  • For another client I’ve been moving from a Quickbooks Online integration to Quickbooks Desktop. This is a multi-tenancy Elixir Phoenix app, so I’ll be keeping the Online functionality and just adding a connection to Quickbooks Desktop. The API docs for QBOnline are fairly good; this is not the case with QB Desktop, where it’s evident that Intuit either has the platform on life support or intentionally obfuscates the functionality to foster a consulting industry around the product. QB Desktop uses a SOAP/XML-style endpoint. Having wrangled fairly nasty endpoints with SAP I wanted to, if at all possible, avoid dealing directly with QB Desktop. I discovered a service called Conductor that does the bulk of the heavy lifting and allows you to hit a very concise REST endpoint.
  • Since the beginning of the year I’ve been transitioning from primarily Elixir projects at the agency to a single Ruby-based product. On that front I’ve been involved in an ongoing integration with BambooHR, partnering with Bamboo to pull employee data from their endpoint.
  • On a personal front I finished the migration of this blog from Ghost back to markdown files. I still love Ghost but managing my own instance and integrating it with my Garden proved to be more management than I wanted.

Personal Heuristic: Make it Readable

I wrote this post back in January and just dusted it off to publish today as I attempt to get back on the blogging horse.


Today I was refactoring a small module that makes calls to an SAP endpoint. The compiler got hung up because it couldn’t find the variable item. It was an easy fix; my code looked like this:

for itm <- data do
    %{"MATNR" => material, "PSTYV" => category, "VBELN" => so} = item
    %{material: material, category: category, so: so}
end

It’s easy to spot (especially if the compiler tells you exactly where it is); in the generator I wrote itm but down below I’m looking for item. Simple; yet this is not the first time something similar has happened to me. It’s also not the first time I’ve specifically confused itm with item, which led me to this conclusion: just write item every time. There is an odd switch in my brain that thinks I’m penalized by the character, and that leaving the e out of item will somehow make my code more terse. While technically true, it’s not worth it. It never is; just write item, every time. People know what item is. itm is more ambiguous, not just because it only saves one letter, but because it could be an abbreviation or some weird naming convention. Why put that mental load on someone, even yourself, reading through this code? This is a tiny example but it’s magnified in function names. While check_preq may be quick to type and take up less horizontal space in an editor, it’s not immediately clear what this function does. I would argue that get_purchase_requisition_number is a much better function name; even if you know nothing about the function, the codebase, or programming in general you can read that and know what’s supposed to happen. Of course there are conventions, e.g. the ! (dangerous) or ? (predicate) method endings in Ruby; exists? returns a boolean, while a ! ending signals the method may raise. These sorts of things require one to be a little familiar with the patterns of a language, but that’s ok; it just means that I can write a function get_purchase_requisition_number! and anyone familiar with Ruby or Elixir will expect the function to raise or return an explicit value (as opposed to something wrapped in an :ok tuple).
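
To make that bang convention concrete, here’s a contrived Elixir sketch (the module, the order map, and the function bodies are made up for illustration): the plain version returns a tagged tuple, the bang version returns the bare value or raises.

defmodule PurchaseRequisitions do
  # Plain version: returns {:ok, number} or {:error, :not_found}.
  def get_purchase_requisition_number(order) do
    case Map.fetch(order, :purchase_requisition_number) do
      {:ok, number} -> {:ok, number}
      :error -> {:error, :not_found}
    end
  end

  # Bang version: returns the bare number or raises, per the ! convention.
  def get_purchase_requisition_number!(order) do
    case get_purchase_requisition_number(order) do
      {:ok, number} -> number
      {:error, :not_found} -> raise "no purchase requisition number on order"
    end
  end
end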

Moving forward I’m calling things what they are even if it comes with a dash of verbosity.

Not to rush Christmas, but I think I’ll try my hand at Advent of Code this year. It will be a good chance to play around with Rust.

Adding a `soft_delete` to Ecto Multi pipelines

I’m a big fan of Ecto, Elixir’s database wrapper. The Multi module lets you build up a series of operations that happen in order; if one fails, the entire operation rolls back. Multi comes with a lot of standard CRUD built in: insert/4, update/4, delete/4 and their bulk counterparts insert_all/5, update_all/5 and delete_all/5 for acting on multiple records.
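
As a quick refresher, a Multi pipeline looks something like this; a minimal sketch assuming hypothetical Customer and Contact schemas and a Repo:

alias Ecto.Multi

# Each named step runs in order inside a single transaction; if any step
# returns an error the whole transaction rolls back and nothing is persisted.
Multi.new()
|> Multi.insert(:customer, Customer.changeset(%Customer{}, %{name: "Acme"}))
|> Multi.insert(:contact, fn %{customer: customer} ->
  Contact.changeset(%Contact{}, %{customer_id: customer.id, email: "hello@example.com"})
end)
|> Repo.transaction()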

I’ve been working on a project where we make use of the soft delete pattern: rather than calling delete/4 on a record, we generally update/4 the record, passing in a deleted_at timestamp:

|> Multi.update(:soft_delete, fn %{customer: customer} -> 
	Changeset.change(customer, %{deleted_at: now})
end)

This works fine, and even when updating multiple records one could take this approach:

|> Multi.update_all(:soft_delete, fn %{customers: customers} ->
	ids = Enum.map(customers, & &1.id)
	from(c in Customer, where: c.id in ^ids, update: [set: [deleted_at: ^now]])
end, [])

I was working on a new feature that will require a cascade of soft deletes: deleting multiple records, their associated records, their children, etc. (as the second example above is doing). Admittedly, I could have just utilized Multi.update_all/5 and put multiple steps into the multi. However, I thought continuously mapping specific ids and passing in set: [deleted_at: ^now] was a little cumbersome and not very idiomatic. Mostly, I wanted to have a bit of fun wondering: “what if Ecto.Multi had a soft_delete_all/5 function?” Of course it doesn’t; this is a niche use case, but it makes sense in this application, so I dug in and found the task to be (as is the case with a lot of Elixir) surprisingly easy.

Just like update_all/5, I wanted to make sure soft_delete_all would handle either queries or functions passed in. I pattern match using the is_function/1 guard, which made it a fairly straightforward operation:

@spec soft_delete_all(Multi.t(), atom(), fun() | Query.t(), keyword()) :: Multi.t()
def soft_delete_all(multi, name, func, opts \\ [])

def soft_delete_all(multi, name, func, opts) when is_function(func) do
  Multi.run(
    multi,
    name,
    operation_fun({:soft_delete_all, func, [set: [deleted_at: Timex.now()]]}, opts)
  )
end

def soft_delete_all(multi, name, queryable, opts) do
  add_operation(multi, name, {:update_all, queryable, [set: [deleted_at: Timex.now()]], opts})
end

The first clause matches against functions while the second matches against a queryable. I’ll explain the distinction between the two.

Under the hood Multi is already equipped to handle functions or queryables; by reading the source of the Multi module I was able to, in one case, forward along the proper structure for the Multi to run, and in the other recreate the same functionality that Multi.update_all uses. Both operation_fun/2 and add_operation/3 are nearly copy-pasted from the Multi core.

In the first instance the multi is passed a function, something like:

|> soft_delete_all(:remove_customer, &remove_customer/1)

In this case a new Multi.run/3 operation is added to the pipeline, but it still needs to run the function it’s passed. It does this with operation_fun/2. The Multi module has a matcher for each of the bulk operations; in my case I only needed one, :soft_delete_all.

defp operation_fun({:soft_delete_all, fun, updates}, opts) do
  fn repo, changes ->
    {:ok, repo.update_all(fun.(changes), updates, opts)}
  end
end

Again, this is identical (save the :soft_delete_all atom) to the Multi module. It runs our function, which creates a query, passes our update: [set: [deleted_at: Timex.now()]] along to the query, and then updates the records.

In cases where we pass a query in:

|> soft_delete_all(:remove_customer, Query.from(c in Customer, where: c.id == 123))

We match on the second function head; here again I followed Ecto’s pattern, writing my own custom add_operation/3:

defp add_operation(%Multi{} = multi, name, operation) do
  %{operations: operations, names: names} = multi

  if MapSet.member?(names, name) do
    raise "#{Kernel.inspect(name)} is already a member of the Ecto.Multi: \n#{Kernel.inspect(multi)}"
  else
    %{multi | operations: [{name, operation} | operations], names: MapSet.put(names, name)}
  end
end

This first checks that the operation name isn’t already in the Multi. If it’s not, we add the operation to the Multi. This works because of the parameters we’ve passed it:

add_operation(multi, name, {:update_all, queryable, [set: [deleted_at: Timex.now()]], opts})

Specifically, {:update_all, queryable, [set: [deleted_at: Timex.now()]], opts}: once again, we aren’t doing anything fancy to soft delete these records; we are using Multi’s ability to :update_all with our provided queryable. The update we are making is [set: [deleted_at: Timex.now()]].

There you have it: it’s :update_all all the way down, which makes sense because we are updating records instead of deleting them, but I think it’s a lot cleaner to write something like this:

query1 = from(c in Customer, where: c.last_purchase <= ^old_date)
query2 = from(u in User, join: c in assoc(u, :customer), on: c.last_purchase <= ^old_date)

Multi.new()
|> soft_delete_all(:customers, query1)
|> soft_delete_all(:users, query2)
#πŸ‘†don't judge this contrived example it's not production code

TIL Struct matching in Guards

Not so much a TIL, but I always get confused about the proper syntax. You can pattern match on a struct and use it in a guard to only let through the structs you want:

@spec address_formatter(BillAddress.t() | ShipAddress.t()) :: String.t()
def address_formatter(%struct{} = address) when struct in [BillAddress, ShipAddress] do
  ...
end

def address_formatter(_), do: raise "AddressError :: Not my address!"

As with a lot of my examples it may be a little contrived, but it is based on a real-world bug I fixed today where address_formatter/2 was getting an %Ecto.Association.NotLoaded{} and trying to format it.

TIL UUIDv4 vs UUIDv7

I’ve always run with UUID v4 because it’s the default for the Ecto.UUID library in Elixir. However, a coworker recommended UUID v7. Having never really looked into UUIDs other than to use them as primary keys, the distinction was news to me.

Effectively:

  • UUID v4 is a totally random identifier and is extremely unlikely to ever conflict with any other generated UUID.
  • UUID v7 also contains a random component, but it is prefixed with a timestamp; this means you can sort them and they are friendlier to index. A minimal sketch of the layout is below.
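
To see why v7 sorts, here’s a rough, non-production Elixir sketch of the v7 layout (this is not how Ecto.UUID generates IDs, and the UUIDv7Sketch module is purely illustrative; in a real project you’d reach for a library): the first 48 bits are a Unix timestamp in milliseconds, followed by version/variant bits and randomness.

defmodule UUIDv7Sketch do
  # Illustrative only: 48-bit millisecond timestamp + version/variant + random bits.
  def generate do
    ts = System.system_time(:millisecond)

    # 74 random bits: 12 for rand_a, 62 for rand_b (6 of the 80 bits are discarded).
    <<rand_a::12, rand_b::62, _::6>> = :crypto.strong_rand_bytes(10)

    # Layout: timestamp | version 7 | rand_a | variant 0b10 | rand_b
    <<ts::48, 7::4, rand_a::12, 2::2, rand_b::62>>
    |> encode()
  end

  # Hex-encode the 16 bytes in the familiar 8-4-4-4-12 grouping.
  defp encode(<<a::binary-size(4), b::binary-size(2), c::binary-size(2), d::binary-size(2), e::binary-size(6)>>) do
    [a, b, c, d, e]
    |> Enum.map(&Base.encode16(&1, case: :lower))
    |> Enum.join("-")
  end
end

Because the timestamp leads, newer UUIDs compare greater than older ones, which is what makes them sortable and kinder to B-tree indexes.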

For further reference: yes, there are UUID versions 1 through 8 as of this writing. If you want a good description of each you can check out this helpful link.

TIL INSERT INTO with SELECT constraints

In the past month I’ve had to write a lot of SQL to migrate a system and split existing “locations” into tenants, i.e. migrating data from a public schema to each tenant’s schema. It gets messy due to foreign key constraints. Order of operations is important, but sometimes you still find yourself in a corner.

In instances where I already have data in the tenant schema, for example customers, and I need to load a subset of data from another table, e.g. customer_addresses, it’s possible to run the query with tenant.customers as a constraint on what you’re inserting:

INSERT INTO tenant.customer_addresses
SELECT * FROM public.customer_addresses AS pc
WHERE EXISTS (SELECT 1 FROM tenant.customers AS tc WHERE tc.id = pc.customer_id);

This will insert rows from public.customer_addresses into tenant.customer_addresses for every tenant.customers record that already exists. I’ve gotten around a lot of tricky constraint issues with missing/incomplete data this way.

Today I Learned ~D[2024-01-03]

You can use Erlang’s :timer.tc function to see how many microseconds a function takes. For example, say you were curious whether Enum.filter/2 or Kernel.--/2 takes longer:

Example:

iex> vals = [1, 2, 3, 4, 5]
iex> :timer.tc(Enum, :filter, [vals, &(rem(&1, 2) == 1)])
{20, [1, 3, 5]}

iex> :timer.tc(Kernel, :--, [vals, [2, 4]])
{3, [1, 3, 5]}

Kernel.--, or vals -- [2, 4], took 3 microseconds while Enum.filter/2 (Enum.filter(vals, &(rem(&1, 2) == 1))) took 20.

This is a fairly trivial example, but I could see this coming in handy with larger operations. For more detailed analysis you can always use Benchee. Thanks to chriserin for helping me get the right Erlang syntax for tc.
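
For completeness, here’s a quick sketch of what the same comparison might look like with Benchee (assuming the benchee dependency is added to the project). Benchee.run takes a map of labeled zero-arity functions and prints comparative stats:

vals = [1, 2, 3, 4, 5]

Benchee.run(%{
  "Enum.filter/2" => fn -> Enum.filter(vals, &(rem(&1, 2) == 1)) end,
  "Kernel.--/2" => fn -> vals -- [2, 4] end
})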