Posts in "programming"

Week in Review

This week I started getting back into some serious work for other clients. My schedule is kind of weird: I'm a consultant working 50-50 for two different companies (~20 hours each), so in a way I have two clients. However, one of the companies is an agency, so I contract for them and then I contract for additional clients through them. I have two clients I bill but three or four clients I work for.

I’m happy to get back into agency work, it’s a lot of fun flitting from project to project. This is an overview of my week:

  • Working in a legacy Phoenix code base, I spent a few hours doing PR review and then developing a new feature that will change the way a specific type of order gets processed. This project has a fairly lengthy state machine that orders pass through (think credit check, sales order, customer notification, shipping); it can get pretty complex depending on the customer's location, configuration, and the product they ordered.
  • I’ve been working on some infrastructure updates in an old Rails project; we had a fantastic junior do the back-breaking work of migrating the app from Rails 5 to Rails 8. It’s working locally but there are some issues running it on the actual EC2 instance.
  • Working with TypeScript, GraphQL, and Rails to develop a history modal to display PaperTrail versions. Most of the back-end work to convert the actual Version records into a nice “log” was done by another developer; I’m just trying to query them and display them with some filtering by date. This was part of a larger all-hands sprint this week and last (about 8 developers).
  • Personally, I made a few improvements to my Go tool to fetch my current time for the month from Harvest. Further improvements would only be a waste of time but I’m having a lot of fun playing with Go.

Harvest Timers and Go!

This week I put together a tiny Go project. Go’s been on my “to learn” list for years now but I’ve never quite gotten around to it. Over the summer I got as far as reading a few articles and skimming the documentation but I didn’t have the time to make anything.

I’m a contractor working, primarily, for two clients. One is an agency that has its own Harvest account for tracking time against client projects. The other client is a traditional product company; I track time and bill them with my own, separate, Harvest account. It’s a bit of an annoyance because having two separate Harvest accounts means I have to sign in twice just to figure out how many hours I’ve worked in the month so far. I created a little CLI (the CLI part is not quite implemented yet) to query both accounts, grab my monthly hours, and total them.

Strapped for time, I asked Opencode to generate a basic query to an endpoint and parse the returned JSON; this outline was enough for me to go the rest of the way implementing what I needed.

You can check it out here, but unless you’re in the exact same situation as me, it’s likely not going to do you much good!

codeberg.org/tfantina/…

Week in Review

I’ve been diving back into some Elixir projects this week; mostly small stuff. I updated Sentry and ensured it was logging at all the endpoints. Story time with this client…

Some six years ago I wired up Sentry to start tracking errors when this code base was fairly shiny and new; at the time I used a free account associated with my work email. My thinking was that we could pilot Sentry and then start a paid plan. I think the nature of this project, and of Elixir in general, is that it’s just fairly fault tolerant. Also, due to various priorities, budgeting, staffing, yadda, yadda, I never took the time to dial Sentry in and filter the noise. Occasionally I’d dip into the account to look for a specific error; about halfway through the month I’d get the email saying we’d hit our limit, and that was that.

I should note this is not a small client; this application is processing millions a day in revenue! On the one hand they should have been paying for Sentry years ago, but on the other hand I get it. Security is not slacking in this organization; their servers get more junk requests to stuff like /wp-admin than any other client I’ve ever worked with. This is, in no small part I believe, due to their rigorous use of bug bounty and associated white-hat programs. (It’s a source of pride that this particular application has never been hacked and generally scores better than most of their tech in pen tests.) It’s interesting how a tool as essential to modern web development as Sentry can be omitted for years and years; I’m confident we could have continued just fine without it, but I’m also betting that if we take the time to filter the noise it will make the customer happier and our lives far simpler.

Today I Learned ~D[2025-12-19]

When I work in Ruby, I really miss the pattern matching of Elixir. Today I discovered a few destructuring tricks for hashes that recreate some of that pattern matching goodness from Elixir. The TLDR is that you can use rightward assignment.

options = {one: 1, two: 2, optional: false}
# then later 
options => {one:, two:, optional:}

$> one
1
$> optional
false

Note the => operator, aka Ruby’s old friend the hashrocket; used this way it’s called rightward assignment. You can make the above more robust with a rescue clause:

options => {one:, two:, optional:} rescue nil 

In the event that somebody passes in an options hash like {one: 1, two: 2}, this will prevent things from blowing up. When the match fails, optional is simply left as nil.
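Pulling the snippets above together into one runnable sketch (rightward assignment needs Ruby 3.0+; the specific error class NoMatchingPatternKeyError appeared in 3.1, but rescuing its parent NoMatchingPatternError covers both):

```ruby
# Rightward assignment: the hashrocket destructures a hash into locals.
options = { one: 1, two: 2, optional: false }
options => { one:, two:, optional: }

puts one       # => 1
puts optional  # => false

# A hash missing one of the keys raises, which the rescue swallows.
partial = { one: 1, two: 2 }
matched = true
begin
  partial => { one: a, two: b, optional: c }
rescue NoMatchingPatternError
  matched = false
end
puts matched   # => false
```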

Rails: Monkey patching TimeZone logic

I’ve been working with a third-party API to import some data, but only if one of the fields, effective_date, is today or in the past. So given

[
  {
    data: {...},
    effective_date: now
  },
  {
    data: {...},
    effective_date: tomorrow
  }
]

the first record will be imported while the second will be skipped. This is fairly easy to write a spec for, but for a project manager, or a tester manually doing exploratory testing, it’s hard to know whether the record with an effective date of tomorrow will actually import when tomorrow comes.
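The selection itself can be sketched in plain Ruby (the payload and field names here are hypothetical, mirroring the shape above):

```ruby
SECONDS_PER_DAY = 24 * 60 * 60

# Hypothetical records in the shape the API returns.
records = [
  { data: { name: "widget" }, effective_date: Time.now },                    # today
  { data: { name: "gadget" }, effective_date: Time.now + SECONDS_PER_DAY },  # tomorrow
]

# Import only records whose effective_date is now or in the past.
importable = records.select { |r| r[:effective_date] <= Time.now }

puts importable.length  # => 1
```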

I’m sure there are libraries and strategies for this, but I put together a little monkey patch for the scenario:

module ActiveSupport
  class TimeZone
    def now
      Time.now + 1.day
    end
  end
end

I sent that over to the PM with instructions on where to put it (essentially, anywhere). Now, I realize this is not a good idea and will likely mess up all kinds of stuff in Rails, but for a one-off test it worked perfectly and was still faster than signing into the service (which we may or may not have access to) and updating the effective_date.
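For what it’s worth, in automated tests ActiveSupport’s TimeHelpers#travel_to is the purpose-built way to shift time. Outside of Rails, the same monkey-patch idea can at least be made reversible by aliasing the original method first; here’s a plain-Ruby sketch of that (patching Time.now rather than ActiveSupport::TimeZone so it runs standalone, without 1.day):

```ruby
SECONDS_PER_DAY = 24 * 60 * 60

class Time
  class << self
    # Keep a handle to the real implementation so the patch can be undone.
    alias_method :real_now, :now

    def now
      real_now + SECONDS_PER_DAY  # pretend it is already tomorrow
    end
  end
end

shifted = Time.now       # about one day ahead
actual  = Time.real_now  # the true current time

# Revert the patch once the experiment is over.
class Time
  class << self
    alias_method :now, :real_now
  end
end
```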

Legacy Software

After about 7 months exclusively working on a product team, I’ve started delving back into a bit of agency work with clients. It’s a stark difference moving from a code base with an up-to-date version of Rails and the latest TS/React best practices to just trying to get docker compose up to run on a Rails 5 project, but it’s also a lot of fun.

As frustrating as working in ancient code bases can be, and I get why a lot of programmers hate it, solving these kinds of problems, especially within the constraints of a tight budget, can be a lot of fun. Greenfield projects are basically writing code, and writing a lot of it; legacy projects help you flex your Docker muscles, read release notes, and calculate end-of-life scenarios for Ubuntu versions!

Git Worktrees

I’m sure git worktrees have their place, perhaps in large compiled projects, but in Ruby and TS I find them to be more of a footgun than not. Not being able to run specs or the server always becomes a hindrance. I’m often tempted to git worktree add when reviewing a PR or doing a quick bugfix on one of my branches, but inevitably I’ll get to a point where I want to make sure it works, and more often than not by then I’ll have forgotten I’m even on a worktree. Just last week I spent about 15 minutes trying to debug a TS error before realizing that I was on a worktree and, therefore, the paths were confusing the TS compiler.

I’d love to take full advantage of worktrees, but based on the above experiences I can’t see the benefit over something like gwip && gs some_other_branch, which is pretty snappy. (If you’re not familiar, gwip and gs are just aliases for git add -A; git rm $(git ls-files --deleted) 2> /dev/null; git commit --no-verify --no-gpg-sign --message "--wip-- [skip ci]" and git switch from the OhMyZsh git plugin.)

Ecto, iLike you

In Elixir, macros are… discouraged. From the docs:

Macros should only be used as a last resort. Remember that explicit is better than implicit. Clear code is better than concise code.

I get it; working in Ruby, where every codebase seems to be playing a game of Schrödinger’s Macro, it’s refreshing to work in an ecosystem where the code in your editor is what it is. As such I’ve always tried to embrace minimalism in Elixir. Yet Elixir has macros, and there are some really good “last resorts” as mentioned above. I’ve encountered one such case a few times when working in Ecto; out of the box, Ecto has at least 99% of anything I could ever want in a database wrapper, but over the years there have been an odd handful of cases where I’ve wanted to extend its functionality in one way or another. I’m going to provide a few examples of this in action.

Keep in mind that both of these examples may feel a bit contrived out of context, and in neither case is the macro reducing the lines of code. However, if placed in the Repo module, these macros would make convenient reusable Ecto functions that could be called throughout the codebase.

Combining iLikes

A few years back I was working on some search functionality for a client. Their products were all made to order for specific customers. Allowing customers to search their order history came with several different query options, including the name (first, last, email) of the person who placed the order, those same options for the salesperson who placed the order on their behalf, or various attributes of the product. This led to a whole chain of joins and ilikes:

  val ->
    query
    |> join(:left, [order: o], u in assoc(o, :user), as: :user)
    |> join(:left, [order: o], s in assoc(o, :salesperson), as: :sp)
    |> join(:left, [user: u], uc in assoc(u, :user_credentials), as: :uc)
    |> join(:left, [sp: sp], sc in assoc(sp, :user_credentials), as: :sc)
    |> join(:left, [order: o], oli in assoc(o, :order_line_items), as: :oli)
    |> join(:left, [oli: oli], prod in assoc(oli, :product_item), as: :prod)
    |> join(:left, [prod: prod], config in assoc(prod, :configuration), as: :config)
    |> join(:left, [config: config], pt in assoc(config, :product_type), as: :pt)
    |> search_function(val)
    |> group_by([order: o], o.id)
  end
end

defp search_function(query, value) do
  str = "%#{value}%"

  query
  |> where(
    [order: o, uc: uc, sc: sc, pt: pt],
    ilike(o.po_number, ^str) or
      ilike(uc.email, ^str) or
      ilike(uc.firstname, ^str) or
      ilike(uc.lastname, ^str) or
      ilike(sc.email, ^str) or
      ilike(sc.firstname, ^str) or
      ilike(sc.lastname, ^str) or
      ilike(pt.name, ^str) or
      ilike(pt.design_id, ^str)
  )
end

It’s readable enough, especially the joins; I’d argue that Ecto’s elegant syntax actually makes this slightly more readable than a standard SQL statement. But search_function is a bit much, to the point where Credo started lighting up cyclomatic complexity warnings.

There was a better way. Maybe not for all cases; frankly, if I hadn’t been warned about the complexity I would have called it a day here. But I thought it would be fun to condense this, piping all the joins into a smaller search_function with fewer ilikes. This is where one can make good use of macros and Ecto:


defp search_function(query, value) do
  str = "%#{value}%"

  query
  |> where(
    [order: o, uc: uc, sc: sc, pt: pt],
    multiple_ilike([:email, :firstname, :lastname], uc, str) or
      multiple_ilike([:email, :firstname, :lastname], sc, str) or
      multiple_ilike([:name, :design_id], pt, str) or
      ilike(o.po_number, ^str)
  )
end

defmacro multiple_ilike(keys, schema, value) do
  Macro.expand(ilike_irr(keys, schema, value), __CALLER__)
end

# Base case: exactly two columns left. This clause must come first;
# otherwise [key | keys] would match two-element lists as well and the
# recursion would bottom out in a FunctionClauseError on [].
defp ilike_irr([key, key2], schema, value) do
  quote do
    ilike(field(unquote(schema), unquote(key)), ^unquote(value)) or
      ilike(field(unquote(schema), unquote(key2)), ^unquote(value))
  end
end

# Recursive case: peel off one column and `or` it with the rest.
defp ilike_irr([key | keys], schema, value) do
  quote do
    ilike(field(unquote(schema), unquote(key)), ^unquote(value)) or
      unquote(ilike_irr(keys, schema, value))
  end
end

Working from the top, this takes the where clause from nine ilike lines down to four while still making just as many ilike calls. I would have employed multiple_ilike/3 for orders as well if we were searching more than one column there.

It’s fairly standard recursion in Elixir, made only a little more frightening with the quoting and unquoting of macro code and passed in runtime values.

To illustrate, let’s call it: multiple_ilike([:email, :firstname, :lastname], user_credentials, "%trav%"). The recursion in ilike_irr/3 needs at least two columns (although one could handle a single column for a safer API). Each step uses Ecto’s ilike/2, taking your list of columns (keys), the table (schema), and the search string. We unquote these values because they are not part of the macro itself, i.e. we want them to be whatever the caller passes in. The first iteration adds ilike(field(user_credentials, :email), "%trav%") to the query, which is fairly straightforward. (If you aren’t familiar with Ecto, field/2 is a way of dynamically accessing a column, which we need because the macro won’t know the schema or keys being passed in ahead of time.) That initial ilike/2 is joined by an or/2, in regular SQL an “or”, and ilike_irr/3 is called again, producing ilike(field(user_credentials, :firstname), "%trav%") as the right-hand side. We continue in this fashion until only two keys are left, at which point we return both ilike queries, leaving a fully formed statement with multiple ilike ... or ilike ... clauses chained together.

I love stuff like this; Ecto already feels like magic (not because it’s obfuscating anything just because of how smooth things are) and this lets me add a few ingredients of my own to the potion.

Copy git hashes

I’ve been reaching more and more for git history commands to get details about the file I’m working on. I used to use tools like GitHub Desktop or Sublime Merge, but I never felt like they added that much value; it’s just faster to call up a git log somefile or git log -L 5,10:somefile. The only shortcoming of this approach is it generally leaves me wanting a commit hash in my clipboard (often to switch to or to run git diff with). No more! Today I doubled down on grabbing these hashes without having to mouse over and select the hash; I give you: git log myfile --pretty=oneline | head -c7 | pbcopy. This is the simplest form of this that I could find.

--pretty=oneline ensures the commit hash comes first; piping into head -c7 grabs the first 7 characters of the most recent hash (you could grab more, or use some kind of regex to get the whole thing, but I believe 7 is the minimum you can give git where it will reliably find a commit). Pipe it to pbcopy and you’ve got a little git hash.
It’s a fair amount of typing. I think I could set --pretty=oneline in my git config, and frankly I could probably alias this whole thing as some kind of function in my .zshconfig, but for now it is what it is.

Weekly Round Up: June 13, 2025 👻

It was a week of state machines. Two separate Rails projects, two separate state machine libraries (state_machines and aasm), both sending emails. One is a fairly straightforward project for a department of education; it’s an old codebase, but Objective built it and has been working on it ever since, so it’s fairly clean and straightforward for its age. I think the more contractors and firms a codebase passes through, the more muddled it gets. I’ve been working on this codebase for about two years now, the entire time converting an old paper process to a digital one; it’s not an overly ambitious project, but the budgeting has necessitated a slower pace of development. With only a few months left in the yearly budget (in education, I guess the fiscal year ends with the school year), I was asked to quickly implement a form that allows users to draft a custom email message and attach a PDF. It’s been a while since I’ve done this with Rails; my last experience was in the Paperclip days, and that was not too fun. I’ve been pleasantly surprised with ActiveStorage, it’s much more plug-and-play than I recall (I’ve also been developing a lot longer now).

The other project is far more involved: my new full-time gig at Built. It’s been exciting to work in tandem with another developer who has been handling the front-end work. Coming from a small agency, I’ve always developed features full stack; part of why I wanted to switch to a dedicated product team was to have experiences like this one, where a greater degree of planning and coordination between developers is required. I started by creating a model last week and writing as many tests as I thought would be relevant. I’ve been through TDD phases in the past, and I think in small teams and projects TDD offers diminishing returns. But it makes a lot of sense in a scenario like this, even on a fairly small team, since I’m developing features that I won’t be able to test in the browser until the other developer has her features in place. She, in turn, won’t know if the front end works until my code is merged into her branch. This feature was the bulk of my week, but it came together in time for some Friday afternoon QA, from which I’m sure there will be several things to fix on Monday morning.