Is learning Vim actually faster if you then spend multiple hours a week tweaking your Neovim config?

Finally took the plunge and decided to start paying for Micro.one… Very good chance I will upgrade to the full service soon.

Before sending that email to hundreds of thousands of customers… ask yourself: is announcing “we now have dark mode” broadcast-worthy?

The NYC Primary

It’s been a rough couple of weeks in world news. A lot has been going on that I’ve felt moved to comment on but haven’t had the heart to actually write down. Zohran Mamdani’s victory in the NYC primary is a ray of sunshine in otherwise very dark times. It’s a powerful reminder that progressives can win even against massive entrenched interests. In the final weeks of the race, billionaires and powerful centrist Democrats such as Bill Clinton poured millions of dollars and coveted endorsements, respectively, into Cuomo’s campaign in what amounted to an attack on Mamdani. The attack failed. The voice of the people could not be silenced. Big as New York is, on the scale of everything else going on in the world this is kind of small potatoes, but a win for progressives anywhere is a victory for progressives everywhere. I’ll take it.

The Underground Railroad

I’ve been on a Colson Whitehead tear in the past year, having started five of his books and finished four of them. This year I raced through the Ray Carney series (Harlem Shuffle and Crook Manifesto) and I just finished The Underground Railroad. While not my favourite of his books, The Underground Railroad was still a compelling read. Whitehead has this talent that I struggle to explain. He’s very good at writing historical fiction that makes you sad or angry at the history without feeling sad or angry with the story. That’s what buoys up books like The Underground Railroad; it was a fantastic read, I daresay a borderline fun read, but it also served as a poignant reminder of the atrocities of chattel slavery, to the point that I was physically moved. This is undoubtedly a hard balance to strike, but Whitehead has managed to do it in nearly every book of his I’ve read.

White Fragility

Full Title: White Fragility: Why It’s So Hard for White People to Talk About Racism

Although it’s a short read, this book was dense. That’s not to say it was a difficult read; quite the contrary, it was extremely approachable, but every single page was so laden with facts that each paragraph served as an essay unto itself. White Fragility asks left-leaning, progressively minded folks to examine their own attitudes towards race: are we more concerned with being racist or with being perceived as racist? Do we only think of racists as “very bad people”, the kind who form lynch mobs or march with tiki torches? Or are we able to see how our own race has given us an unfair advantage? Are we able to see how we silently perpetuate racial disparities to suit our own needs? Do we do this in subtle, subconscious ways or more overtly, by proclaiming that we are “colour blind” and therefore race doesn’t matter?

Not only did White Fragility implicate me in my own racism, it also gave me pause to reflect on other areas in which I have blind spots. I benefit from various privileges, not just as a consequence of my race but also my gender, sexual identity, appearance, etc. What things have I said or done over the years that uphold and reinforce the patriarchy? Am I excluding disabled people in my actions? (A very salient question for somebody who designs and builds websites; I reckon this site is not fully WCAG compliant.)

Definitely worth a read, likely a second in a few years.

Weekly Round Up: June 13, 2025 👻

It was a week of state machines. Two separate Rails projects, two separate state machine libraries (state_machines and aasm), both sending emails. One is a fairly straightforward project for a department of education; it’s an old codebase, but Objective built it and has been working on it ever since. As such, it’s fairly clean and straightforward for its age. I think that the more contractors and firms a codebase passes through, the more muddled it gets. I’ve been working on this codebase for about two years now, the entire time converting an old paper process to a digital one. It’s not an overly ambitious project, but the budgeting has necessitated a slower pace of development. With only a few months left in the yearly budget (in education I guess the fiscal year ends with the school year) I was asked to quickly implement a form that allows users to draft a custom email message and attach a PDF. It’s been a while since I’ve done this with Rails; my last experience doing so was in the Paperclip days, and that was not too fun. I’ve been pleasantly surprised with ActiveStorage, it’s much more plug-and-play than I recall (I’ve also been developing a lot longer now).

The other project is far more involved: my new full-time gig at Built. It’s been exciting to work in tandem with another developer who has been handling the front-end work. Coming from a small agency, I’ve always developed features full stack. Part of why I wanted to switch to a dedicated product team was to have experiences like this one, where a greater degree of planning and coordination between developers is required. I started by creating a model last week and writing as many tests as I thought would be relevant. I’ve been through TDD phases in the past, but I think in small teams and projects TDD offers diminishing returns. It makes a lot of sense in a scenario like this, even on a fairly small team, since I’m developing features that I won’t be able to test in the browser until the other developer has her features in place. She in turn won’t be able to know if the front end works until my code is merged into her branch. This feature was the bulk of my week, but it came together in time for some Friday afternoon QA, from which I’m sure there will be several things to fix on Monday morning.

Multi-tenancy with Phoenix and Elixir

There are lots of good places to start with multi-tenancy in Elixir (although I’d recommend Ecto’s own docs for either foreign keys or postgres schemas). Most of the write-ups and tutorials start the same way: “generate a new Phoenix application with mix phx.new”. While this is great if you’re starting an enterprise SaaS app from scratch, it leaves something to be desired if you, like I was, are migrating an existing codebase with thousands of users and products to a multi-tenant application. I recently went through this with an enterprise client, and there were enough pitfalls and interesting problems to solve that it seemed to warrant a detailed post.

I believe the solution I put together is both effective and elegant, but it is not without its pain points. Mainly, if you are going to use PostgreSQL schemas (which I did) you are going to have to migrate your existing data into said prefixes. There is no easy way around this, it’s just a slog you have to do; more on that later.

Schemas?

I went back and forth for a while but finally settled on query prefixes, as they felt a little more elegant: they segment data without having to add new foreign keys to any tables. They also make it easy to migrate or wipe customer data if needed. Admittedly, if you’re managing tens of thousands of tenants in a single database this approach will be a bottleneck. In my case that was not a concern; there are two current tenants and the client only expects to add a few tenants every year, if that. As mentioned, Ecto has great docs on setting up schemas; however, I opted to use a dependency called Triplex, mostly for the sake of time (about a week in I realized I could have rewritten most of the required features in a day or two, but we had about a month to make this transition, so a refactor at that point seemed like overkill). Schemas work because we are using PostgreSQL; you can kind of hack together “schemas” with MySQL, but under the hood they’re just separate databases, and I can’t vouch for that approach because my Elixir projects are mostly on Postgres.
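For anyone curious, basic tenant management with Triplex looks roughly like this. This is a sketch from memory rather than a definitive reference; “salt_lake” is a placeholder tenant name:

```elixir
# config/config.exs — tell Triplex which repo to use
config :triplex, repo: MyApp.Repo

# Creating a tenant creates the PostgreSQL schema and runs your
# tenant migrations (priv/repo/tenant_migrations by default):
{:ok, _tenant} = Triplex.create("salt_lake")

# Dropping a tenant drops the schema and everything in it:
Triplex.drop("salt_lake")
```

Tenant-specific tables get their migrations generated with `mix triplex.gen.migration` so they run inside each tenant’s schema rather than in public.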

The first big hurdle is ensuring that your queries are run in the right schema. By default Ecto is going to run queries in the public schema. On any given query you can change this by passing in a prefix: option, i.e. Repo.one(query, prefix: "some_prefix"). Now, rewriting hundreds or thousands of Repo actions with a variable prefix is not exactly convenient, but it’s imperative to ensure queries are scoped to the correct schema. Just imagine the catastrophic breach if Customer A got back Customer B’s data!

Thankfully you do not have to rewrite all your queries to explicitly call a prefix. Enter Repo hooks! Ecto.Repo comes with some handy built-in behaviours that allow one to effectively write Repo.one(query, prefix: "some_prefix") without actually writing it for every single query. You can implement prepare_query/3 to filter and modify the prefix. You add these hooks to YourApp.Repo. This is prepare_query/3 in its simplest form:

@impl true 
def prepare_query(_operation, query, opts) do 
	opts = Keyword.put(opts, :prefix, "some_prefix")
	{query, opts}
end

Now all queries will look at the some_prefix schema rather than public. In our app we had a few tables that we wanted scoped to the public schema. For example, you may have an admins table, or possibly oban_jobs, tenants, etc. You can handle this in a few ways:

@impl true 
def prepare_query(_operation, query, opts) do 
	if opts[:skip_prefix] do 
		{query, opts}
	else 
		opts = Keyword.put(opts, :prefix, "some_prefix")
		{query, opts}
	end 
end

This works, although it necessitates passing skip_prefix: true to every Repo call you want left in the public schema; likely fewer calls than before, but it still kind of defeats the purpose of prepare_query/3.

@sources ~w[admins oban_jobs oban_peers customer_pricing]

@impl true 
def prepare_query(_operation, %Ecto.Query{from: %{source: {source, _}}} = query, opts) when source in @sources do 
	{query, opts}
end 

def prepare_query(_operation, query, opts) do
	opts = Keyword.put(opts, :prefix, "some_prefix")
	{query, opts}
end

By pattern matching on your allowed tables you can bypass your prefix override. I used a combination of both of the above approaches, with a list of allowed source tables as well as the option to skip_prefix, which adds a manual override to the API. In theory you shouldn’t need it but you never know, tests, edge cases, shrugs…
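Under that setup, call sites end up looking like this (Ecto passes custom options like :skip_prefix through to prepare_query/3; the Customer and Admin schemas here are illustrative):

```elixir
# Scoped to the tenant prefix by the prepare_query/3 override:
Repo.all(Customer)

# Bypasses the override, because "admins" is in @sources:
Repo.all(Admin)

# Manual escape hatch via the custom :skip_prefix option:
Repo.all(Customer, skip_prefix: true)
```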

Tenant Selection

At this point we’ve converted every query in the application to use a prefix override in about 10 lines of code. Not bad, but it’s not dynamic yet; I’ve hard-coded some_prefix into my queries. Before we make the hook dynamic we need to determine how Phoenix is going to recognize the tenant. There are many ways of doing this; in my case, for now, we are using subdomains.

Since the subdomain is available on conn.host, I set up a plug to fetch it:

defmodule MyApp.TenantPlug do
...

def select_organization_from_domain(conn, _opts) do
	subdomain = get_subdomain(conn)
	put_session(conn, :tenant, subdomain)
end

defp get_subdomain(%{host: host}) do
	[subdomain | _] = String.split(host, ".")
	subdomain
end

This gets the subdomain and puts it in the session (which is not strictly necessary but is nice to have). Next, let’s pass it to Repo. As with the queries, one need not rewrite every Repo call to pass in a :subdomain option; here Elixir/Phoenix has your back. In Phoenix each request is handled by its own process, and that process can store data for itself. Back in Repo I added these little helpers:

@tenant_key {__MODULE__, :tenant}

def put_tenant_subdomain(subdomain) do
	Process.put(@tenant_key, subdomain)
end

def get_tenant_subdomain do
	Process.get(@tenant_key)
end

Now back in the TenantPlug we can add the subdomain to the process:

def select_organization_from_domain(conn, _opts) do
	subdomain = get_subdomain(conn)
	Repo.put_tenant_subdomain(subdomain)
	put_session(conn, :tenant, subdomain)
end

A second Repo behaviour can be used to pass options to the Repo call: default_options/1. Rather than explicitly writing opts = Keyword.put(opts, :prefix, "some_prefix") in the prepare_query/3 hook, default_options/1 sets up your opts before the Repo function runs. From there we call get_tenant_subdomain/0 to retrieve the subdomain/query prefix we set in the plug:

@impl true
def default_options(_operation) do
	[prefix: get_tenant_subdomain()]
end

@tenant_key {__MODULE__, :tenant}
def get_tenant_subdomain, do: Process.get(@tenant_key)

Like prepare_query/3, default_options/1 will run with every query.
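One detail worth noting, per my reading of the Ecto docs: options passed explicitly to a Repo call take precedence over default_options/1, so a one-off query against another schema is still easy:

```elixir
# Uses the tenant prefix supplied by default_options/1:
Repo.all(Customer)

# An explicit :prefix overrides the default for a one-off query:
Repo.all(Customer, prefix: "public")
```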

With this implemented, navigating to a specific subdomain will set the tenant in the current process (as well as in the session), and any database queries in that session will be scoped to the tenant’s schema. Putting it all together, we have something like this in repo.ex:


@allowed_sources ~w[oban_jobs tenants]

  @impl true
  def default_options(_operation) do
    [prefix: get_tenant_subdomain()]
  end

  @impl true
  def prepare_query(_operation, %Ecto.Query{from: %{source: {source, _}}} = query, opts)
      when source in @allowed_sources do
    opts = Keyword.put(opts, :prefix, "public")
    {query, opts}
  end

  def prepare_query(_operation, query, opts) do
    if opts[:skip_prefix] do
      # manual escape hatch back to the public schema
      {query, Keyword.put(opts, :prefix, "public")}
    else
      # :prefix was already set to the tenant's subdomain by default_options/1
      {query, opts}
    end
  end

  @tenant_key {__MODULE__, :tenant}

  def put_tenant_subdomain(subdomain) do
    Process.put(@tenant_key, subdomain)
  end

  def get_tenant_subdomain do
    Process.get(@tenant_key)
  end

The simplified version of my tenant_selection_plug.ex looks like:

  def select_organization_from_domain(conn, _opts) do
    subdomain = get_subdomain(conn)
    Repo.put_tenant_subdomain(subdomain)
    put_session(conn, :tenant, subdomain)
  end

  defp get_subdomain(%{host: host}) do
    [subdomain | _] = String.split(host, ".")
    subdomain
  end

In production we are handling a lot more, such as authorization with Guardian, but this shows how simple it is to get a subdomain and add it to the session. The above is a fairly bare-bones approach; our final project had a lot more customization and ended up being organized a bit differently. For example, we extracted the functions dealing with getting and setting @tenant_key in the process to their own module. My hope is that the above lays the groundwork for anyone looking to do something similar.

Data Migration

I wish I had a data-migration solution half as slick as the querying that Ecto’s behaviours make possible. I was unable to find an elegant way to migrate relevant data into specific schemas, so I was forced to do it with good old SQL.

-- copy locations
INSERT INTO salt_lake.locations SELECT * FROM public.locations WHERE id = 'salt_lake_location_id';

-- copy customers 
INSERT INTO salt_lake.customers SELECT * FROM public.customers WHERE location_id = 'salt_lake_location_id';

I had about 50 queries similar to this. Fortunately, tenants were mapped to locations, and at the time of the migration the client only had two tenants (the system was migrating from a product business to a consulting business). I ran these queries twice, replacing salt_lake with bakersfield on the second iteration. In my case, due to the way the system was originally designed to work with an external system (look’en at you, Quickbooks) and some changes the customer was making to how that system would be used, this migration ended up being a bit more hairy than expected. I had to write several ad-hoc queries that looked less like the above and more like:

INSERT INTO salt_lake.qb_orders
SELECT qb.* FROM qb_orders qb
JOIN orders o ON o.qb_order_id = qb.id
JOIN customers c ON o.customer_id = c.id
WHERE NOT EXISTS (SELECT 1 FROM salt_lake.qb_orders slcqb WHERE slcqb.id = qb.id)
AND c.name ILIKE '%A Problematic Customer%';

Again, that’s not the fault of the multi-tenancy setup; migrating data in any complex system is always going to have its prickly bits. If anyone has ideas for a more elegant migration pattern (for the first two queries; ignore the last one, that’s an unfortunate specific), I’m all ears: shoot me an email at self[at]travisfantina.com.

Today I Learned ~D[2025-06-02]

File this under “things I knew but have to look up every time”…

If you want to spin up a Docker container without containerizing a service like postgres, for example if you have a fully seeded DB on your machine and don’t want to go through the hassle of copying/re-seeding it in Docker, you can point the container at the host with host.docker.internal. In docker-compose.yml you can write:

environment:
      - DB_HOST=host.docker.internal
      - DB_PORT=5432
      - DB_USERNAME=your_pg_user
      - DB_PASSWORD=your_password
      - DB_NAME=your_db
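One caveat, as I understand it: host.docker.internal resolves out of the box on Docker Desktop (macOS/Windows), but on Linux you have to map it yourself (Docker Engine 20.10+), e.g.:

```yaml
services:
  app:
    extra_hosts:
      # host-gateway is a special value that resolves to the host's IP
      - "host.docker.internal:host-gateway"
```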

Because I switch projects a lot (agency life) there are occasions where a legacy codebase just stops working (system updates, deprecations, etc.). At times like these I like falling back to a Docker container (upgrading the project is not always an option), but I may not want to lose or re-copy all my data from when I worked on the project before. Yes, I know dev data should be ephemeral and easy to reseed, but in the real world this is not always how things work!