Clustering with Phoenix 1.7

Byron Salty

Posted on August 8, 2023

There are several good articles about how to set up a Phoenix cluster with PubSub messaging between nodes, but I found them to be incomplete or slightly out of date. In this article I'll give step-by-step instructions on how to create a clustered Phoenix 1.7 application in 2023.

The goal of this article is to run multiple instances of our Phoenix application that can seamlessly pass messages between nodes.

The target production environment for this article is Fly.io, but I'll also show how to set up your project for local development so you can make sure you are subscribing and broadcasting correctly.

Simplest Demo Project

We're going to create a stripped down Phoenix app that does only two things:

  • Exposes a LiveView that displays received messages
  • Has an API endpoint to receive external messages via HTTP

Create project

Note: we will build this project step by step, but feel free to check out the completed project here.

Setup Phoenix app

Run phx.new with most of the extras removed:

mix phx.new talkie --no-ecto --no-mailer

Update the config/dev.exs file to allow passing in a PORT since we'll need to run two instances:

# Before
config :talkie, TalkieWeb.Endpoint,
    http: [ip: {127, 0, 0, 1}, port: 4000],
    ...

# After
port = String.to_integer(System.get_env("PORT") || "5000")

config :talkie, TalkieWeb.Endpoint,
    http: [ip: {127, 0, 0, 1}, port: port],
    ...

Create the ping API endpoint:

# Add this controller
defmodule TalkieWeb.APIController do
    use TalkieWeb, :controller

    def ping(conn, _params) do
        # Not doing anything but responding so far
        json(conn, %{pong: true})
    end
end

Add the ping route to your router.ex inside the existing /api scope:

scope "/api", TalkieWeb do
    pipe_through :api
    get "/ping", APIController, :ping
end

Test it:

curl http://127.0.0.1:5000/api/ping

# {"pong": true}

Create the LiveView viewer

Add a LiveView module like this:

defmodule TalkieWeb.ViewerLive.Index do
    use TalkieWeb, :live_view

    @impl true
    def render(assigns) do
        ~H"""
        <h1>Messages</h1>
        <%= for msg <- @messages do %>
            <span><%= msg %></span>
        <% end %>
        """
    end

    @impl true
    def mount(_params, _session, socket) do
        {:ok, assign(socket, :messages, [])}
    end

    @impl true
    def handle_info({:message, msg}, socket) do
        {:noreply, assign(socket, :messages, [msg | socket.assigns.messages])}
    end

    @impl true
    def handle_info(
        %Phoenix.Socket.Broadcast{
            topic: "messages",
            event: "ping",
            payload: {:message, msg}
        }, socket) do
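        # Endpoint broadcasts arrive wrapped in a %Phoenix.Socket.Broadcast{}
        # struct; unwrap the payload and reuse the clause above.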
        handle_info({:message, msg}, socket)
    end
end

And add this line to your router.ex:

    # in the browser scope
    live "/viewer", ViewerLive.Index

Test it by pointing your browser at http://127.0.0.1:5000/viewer.

Listen for messages on one instance

Now that we have those components, let's wire them together on a single instance.

We're going to have our Viewer listen for messages, and have our API broadcast whenever the /ping API is hit.

You "listen" by subscribing to a topic. Update the liveview mount to look like this now:

def mount(_params, _session, socket) do
    TalkieWeb.Endpoint.subscribe("messages")
    {:ok, assign(socket, :messages, ["test message..."])}
end

And you broadcast by adding a broadcast call to the ping function in the API controller:

def ping(conn, _params) do
    Phoenix.PubSub.broadcast!(Talkie.PubSub, "messages",
        %Phoenix.Socket.Broadcast{
            topic: "messages",
            event: "ping",
            payload: {:message, "ping"}
        }
    )
    json(conn, %{pong: true})
end

Test it

Start the server:

PORT=5000 mix phx.server

Open a browser to http://127.0.0.1:5000/viewer

Hit the ping API again:

curl http://127.0.0.1:5000/api/ping

This should cause the Viewer to display a "ping" message instantly.

Clustering for local dev

First, let's see clustering NOT working. Start two instances like this:

PORT=5000 elixir --name a@127.0.0.1 -S mix phx.server
PORT=5001 elixir --name b@127.0.0.1 -S mix phx.server

Bring up a Viewer pointed at both servers:
http://127.0.0.1:5000/viewer
http://127.0.0.1:5001/viewer

And if you hit the ping API with curl, you'll see that only one viewer updates.

Add libcluster to your dependencies in mix.exs and rerun mix deps.get:

{:libcluster, "~> 3.3"},

Add the libcluster config to your config/dev.exs:

config :libcluster,
    topologies: [
        example: [
            strategy: Cluster.Strategy.Epmd,
            config: [hosts: [:"a@127.0.0.1", :"b@127.0.0.1"]],
            connect: {:net_kernel, :connect_node, []},
            disconnect: {:erlang, :disconnect_node, []},
            list_nodes: {:erlang, :nodes, [:connected]}
        ]
    ]

Update your application.ex

You will define a topologies variable in your start/2 function and add a Cluster.Supervisor to the list of children.

It will look like:

def start(_type, _args) do
    topologies = Application.get_env(:libcluster, :topologies) || []

    children = [
        ...
        {Cluster.Supervisor, [topologies, [name: Talkie.ClusterSupervisor]]}
    ]

    ...

Now if you restart both instances and test again, you SHOULD see both Viewers update when you ping either one of them.
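
For example, restarting with iex instead of elixir gives you a shell to poke around in; the flags are the same as before, and the IEx lines below are just a quick way to confirm the nodes found each other:

# terminal 1
PORT=5000 iex --name a@127.0.0.1 -S mix phx.server

# terminal 2
PORT=5001 iex --name b@127.0.0.1 -S mix phx.server

# in either IEx session, the other node should appear:
# iex(a@127.0.0.1)1> Node.list()
# [:"b@127.0.0.1"]

# ping either instance and watch both viewers update
curl http://127.0.0.1:5000/api/ping
curl http://127.0.0.1:5001/api/ping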

Pretty cool right!

Clustering with Fly.io

First, deploy the app as-is to Fly and confirm that, just like before, clustering isn't working yet.
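
If you haven't set the app up on Fly yet, the flyctl flow is roughly the following (a sketch assuming flyctl is installed and you're logged in; exact prompts vary by flyctl version):

fly launch           # detects the Phoenix app and generates a Dockerfile and fly.toml
fly deploy
fly scale count 2    # make sure at least 2 machines are running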

What you should see (assuming you default to 2 instances) is a 50/50 chance of receiving the ping message when you pull up a /viewer page and then hit the ping API. A rough test is to open 2-4 separate windows at the /viewer path and send a ping to see how many of the windows update.

Add libcluster configuration to prod in the config/runtime.exs file

app_name = System.get_env("FLY_APP_NAME") || "talkie"

config :libcluster,
    debug: true,
    topologies: [
        fly6pn: [
            strategy: Cluster.Strategy.DNSPoll,
            config: [
                polling_interval: 5_000,
                query: "#{app_name}.internal",
                node_basename: app_name
            ]
        ]
    ]

Change how the RELEASE_NODE value is set in rel/env.sh.eex. The DNSPoll strategy expects each node to be named node_basename@<IP returned by the DNS query>, so the node name needs to be exactly the app name plus the machine's private IPv6 address:

Note: if you don't have this file, you just haven't completed a fly deploy yet. Do that first.

# Before
export RELEASE_NODE="${FLY_APP_NAME}-${FLY_IMAGE_REF##*-}@${FLY_PRIVATE_IP}"

# After
ip=$(grep fly-local-6pn /etc/hosts | cut -f 1)
export RELEASE_NODE="${FLY_APP_NAME}@${ip}"
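
To confirm the nodes actually found each other after a deploy, you can open a remote IEx session on one of the machines and check the connected nodes. This sketch assumes the standard Fly Elixir release layout, where the release binary ends up at /app/bin/talkie:

fly ssh console
# inside the machine:
/app/bin/talkie remote
# inside the IEx session:
Node.list()
# => should list the other machine(s), e.g. [:"talkie@fdaa:..."]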

See more in the Fly.io guide.

See the full project

Check out the GitHub repo for the project used to test this article.

Other Resources

There are some good examples here:

A well-written article from Alvise Susmel that helped me a lot.
