
Add Elixir Registry support for Finch instance naming#366

Closed
meox wants to merge 1 commit into sneako:main from meox:registry_support

Conversation


@meox meox commented Apr 12, 2026

Add Elixir Registry support for Finch instance naming

Problem

Finch only accepts atom() for the :name option in start_link/1. This forces
users who dynamically create Finch pools from runtime configuration (e.g., config
files, databases, multi-tenant setups) to use String.to_atom/1, which leaks atoms
since the BEAM never garbage-collects them. This is a well-known footgun for any
system with unbounded dynamic names.

Solution

Allow {:via, Registry, {registry, key}} tuples as the :name option, following
the standard Elixir convention for dynamic process naming without atom creation.

```elixir
# Before (atom leak from dynamic input):
name = String.to_atom("finch_#{tenant_id}")
Finch.start_link(name: name)
Finch.request(req, name)

# After (safe, no atom creation from user input):
name = {:via, Registry, {MyApp.FinchRegistry, tenant_id}}
Finch.start_link(name: name)
Finch.request(req, name)
```
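The via-tuple example assumes a Registry process is already running. A minimal sketch of that prerequisite, started under the application's supervision tree (MyApp.FinchRegistry is the hypothetical name from the snippet above):

```elixir
# Start a unique-keys Registry that the via tuples will point at. In a real
# app this child would live in MyApp.Application.start/2.
children = [
  {Registry, keys: :unique, name: MyApp.FinchRegistry}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)
```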

Design

Finch's internals (Elixir Registry.lookup/2, named ETS tables) require atom
identifiers — this is a hard constraint from the BEAM and Elixir's Registry
module (@type registry :: atom). The solution introduces a thin indirection
layer:

  1. Finch.NameRegistry — a shared ETS table (:finch_via_names) that maps
    {:via, Registry, {reg, key}} tuples to auto-generated internal atoms
    (e.g., :"Finch.Instance.42"). These atoms are bounded by the number of
    active Finch instances, not by user input.

  2. Finch.ViaCleaner — a tiny GenServer added as a child of the Finch
    supervision tree (only when using via tuples) that removes the ETS mapping
    entry on termination, preventing stale entries.

  3. resolve_finch_name/1 — called at the entry point of every public API
    function. For atoms it's a no-op passthrough; for via tuples it resolves to
    the internal atom via the ETS lookup.
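A minimal sketch of the mapping in step 1. The module name, table name, and atom-naming scheme below are illustrative, not the PR's exact code:

```elixir
defmodule NameRegistrySketch do
  @table :finch_via_names_sketch

  defp ensure_table do
    if :ets.whereis(@table) == :undefined do
      :ets.new(@table, [:named_table, :public, :set])
    end

    :ok
  end

  # Map a via tuple to an internal atom, minting one on first use. The atom
  # count is bounded by the number of registered instances, not user input.
  def register({:via, Registry, {_reg, _key}} = via) do
    ensure_table()

    case :ets.lookup(@table, via) do
      [{^via, atom}] ->
        atom

      [] ->
        atom = :"Finch.Instance.#{:erlang.unique_integer([:positive])}"
        :ets.insert(@table, {via, atom})
        atom
    end
  end

  # Atoms pass through untouched; via tuples resolve through the ETS table.
  def resolve(name) when is_atom(name), do: name

  def resolve({:via, Registry, _} = via) do
    [{^via, atom}] = :ets.lookup(@table, via)
    atom
  end

  # Called on shutdown (the ViaCleaner's job) to avoid stale entries.
  def unregister(via), do: :ets.delete(@table, via)
end
```

In this sketch `register/1` is idempotent, so restarting an instance under the same via tuple reuses its atom instead of minting a new one.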

What changes

  • lib/finch/name_registry.ex — new: ETS-backed via→atom mapping
    (register/resolve/unregister)
  • lib/finch/via_cleaner.ex — new: GenServer that cleans up the ETS entry on
    shutdown
  • lib/finch.ex — updated: @type name(), start_link/1, init/1, finch_name!/1;
    adds resolve_finch_name/1 and wires resolution into all public API functions
  • lib/finch/pool/manager.ex — fixed: race in get_pool_supervisor/2 by catching
    the :noproc exit when the supervisor dies between the Registry lookup and
    Supervisor.count_children/1
  • test/finch_test.exs — new: "Registry-based naming" describe block with tests
  • test/finch/http1/pool_test.exs — fixed: flaky idle-timeout tests by
    increasing assert_receive timeouts from 100-150ms to 500ms

What stays unchanged

  • All internal modules (Pool.Manager, Pool.Supervisor, HTTP1.Pool,
    HTTP2.Pool, PoolMetrics) — they only ever receive atoms after resolution
  • All existing atom-based naming — 100% backward compatible
  • Zero performance impact on atom-named instances (resolve_finch_name/1 for
    atoms is a single pattern match)

Tests

  • Start Finch with via tuple, make HTTP/1 request
  • Multiple via-tuple instances coexisting
  • get_pool_status/2 with via tuple name
  • Dynamic start_pool/3 with via tuple name
  • stop_pool/2 with via tuple name
  • get_pool_count/2 and set_pool_count/3 with via tuple name
  • ETS cleanup: entry removed after Finch stops
  • Supervisor discoverable via user's Registry
  • Validation: raises on invalid name shape

Flaky test fixes

  • get_pool_supervisor/2 race condition: Supervisor.count_children/1 was
    called on a PID obtained from Registry.lookup/2 without guarding against the
    supervisor dying between the lookup and the call. Fixed by catching :noproc
    exits and returning :not_found.

  • HTTP1 pool idle-timeout tests: assert_receive timeouts of 100-150ms were
    too tight under parallel test load; increased to 500ms. The refute_receive
    timeouts (which assert that something does NOT happen within a window) are
    left unchanged, since they need to be tight to be meaningful.
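The :noproc guard from the first bullet can be sketched as follows. The module and function names here are hypothetical; the actual change lives inside get_pool_supervisor/2:

```elixir
defmodule PoolLookupSketch do
  # A pid obtained from Registry.lookup/2 may die before we call it, so a
  # plain Supervisor.count_children/1 can exit with :noproc. Catch that exit
  # and report :not_found instead of crashing the caller.
  def safe_count_children(sup) do
    try do
      {:ok, Supervisor.count_children(sup)}
    catch
      :exit, {:noproc, _} -> :not_found
    end
  end
end
```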

Collaborator

@NelsonVides NelsonVides left a comment


Question: this is intended to help when you create dynamic Finch instances at runtime, but wouldn't a single instance with dynamic pools do the job?

We've recently introduced that feature; it's already in main and will be released to Hex soon enough. This PR introduces quite some complexity, so I'd like to understand what it solves that the recent features don't 🤔

@meox
Author

meox commented Apr 15, 2026

The main goal is to start pools at runtime without generating atoms.

@NelsonVides
Collaborator

Yeah, but you can already start pools dynamically at runtime without generating atoms; see #345 and all the linked PRs. Doesn't that solve the issue exactly, or is something still missing? 🤔

@meox
Author

meox commented Apr 15, 2026

> Yeah, but you can start pools dynamically at runtime without generating atoms already, see #345 and all the linked PRs, doesn't that exactly solve the issue already or is there anything missing? 🤔

I missed that part. So we probably need to adapt the Req library to take advantage of it?

@meox meox closed this Apr 15, 2026
@NelsonVides
Collaborator

It hasn't made it into a Hex package yet; I think we're just missing a Mint release (#341). Merge that, and Finch should be ready. Jose and Wojtek are aware of these changes (Jose proposed them!), so Req is probably already adapting to this too, but I haven't followed Req's changes :)

Thanks a lot for your PR nevertheless!
