
Phoenix + ModelRiver

Real-time AI applications with Elixir's Phoenix framework. LiveView streaming, channel-based AI chat, and webhook-driven async processing.

Overview

Phoenix is Elixir's premier web framework, built for real-time, fault-tolerant applications. Combined with ModelRiver, you can build AI-powered LiveView UIs, channel-based chat, and webhook-driven processing pipelines.

What you get:

  • AI-powered LiveView components with real-time streaming
  • Channel-based chat with ModelRiver's WebSocket support
  • Webhook receivers for async AI processing
  • Fault-tolerant AI pipelines leveraging Elixir's supervision trees

Quick start

Add dependencies

ELIXIR
# mix.exs
defp deps do
  [
    {:phoenix, "~> 1.7"},
    {:phoenix_live_view, "~> 1.0"},
    {:req, "~> 0.5"},
    {:jason, "~> 1.4"}
  ]
end

Create an AI client module

ELIXIR
defmodule MyApp.AI do
  @base_url "https://api.modelriver.com/v1"

  # Read the key at runtime; a module attribute would capture it at compile time.
  defp api_key, do: System.fetch_env!("MODELRIVER_API_KEY")

  def chat(workflow, messages, opts \\ []) do
    body =
      %{workflow: workflow, messages: messages}
      |> Map.merge(Map.new(opts))

    Req.post!("#{@base_url}/ai",
      json: body,
      headers: [{"authorization", "Bearer #{api_key()}"}]
    )
    |> Map.get(:body)
  end
end

Use in a controller

ELIXIR
defmodule MyAppWeb.ChatController do
  use MyAppWeb, :controller

  def create(conn, %{"message" => message}) do
    result =
      MyApp.AI.chat("my-chat-workflow", [
        %{role: "user", content: message}
      ])

    json(conn, result)
  end
end
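For the controller to be reachable, it needs a route. A minimal sketch, assuming a standard generated router with an `:api` pipeline (the `/api/chat` path is illustrative):

ELIXIR
# lib/my_app_web/router.ex
scope "/api", MyAppWeb do
  pipe_through :api

  post "/chat", ChatController, :create
end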

LiveView streaming

Build a real-time chat UI with LiveView:

ELIXIR
defmodule MyAppWeb.ChatLive do
  use MyAppWeb, :live_view

  def mount(_params, _session, socket) do
    {:ok, assign(socket, messages: [], loading: false, task_ref: nil)}
  end

  def handle_event("send", %{"message" => message}, socket) do
    messages = socket.assigns.messages ++ [%{role: "user", content: message}]

    # Run the AI call in a task so the LiveView process stays responsive
    task = Task.async(fn -> MyApp.AI.chat("my-chat-workflow", messages) end)

    {:noreply, assign(socket, messages: messages, loading: true, task_ref: task.ref)}
  end

  # The task sends {ref, result} back to this process when it finishes
  def handle_info({ref, result}, %{assigns: %{task_ref: ref}} = socket) do
    Process.demonitor(ref, [:flush])
    content = get_in(result, ["data", "choices", Access.at(0), "message", "content"])

    messages = socket.assigns.messages ++ [%{role: "assistant", content: content}]
    {:noreply, assign(socket, messages: messages, loading: false, task_ref: nil)}
  end
end
HEEX
<div id="chat" class="flex flex-col gap-4">
  <%= for msg <- @messages do %>
    <div class={"p-3 rounded #{if msg.role == "user", do: "bg-blue-100", else: "bg-gray-100"}"}>
      <strong><%= msg.role %>:</strong> <%= msg.content %>
    </div>
  <% end %>

  <.form for={%{}} phx-submit="send">
    <input type="text" name="message" placeholder="Type a message..." class="w-full p-2 border rounded" />
    <button type="submit" disabled={@loading}>Send</button>
  </.form>
</div>

Webhook receiver

Process async ModelRiver results via webhooks:

ELIXIR
defmodule MyAppWeb.WebhookController do
  use MyAppWeb, :controller

  def handle(conn, params) do
    # Verify the webhook signature against the raw request body
    signature = get_req_header(conn, "x-modelriver-signature") |> List.first()

    if is_binary(signature) and verify_signature(conn, signature) do
      process_webhook(params)
      json(conn, %{status: "ok"})
    else
      conn |> put_status(401) |> json(%{error: "Invalid signature"})
    end
  end

  defp process_webhook(%{"event" => "request.completed"} = params) do
    # Handle completed AI request
    result = params["data"]
    IO.inspect(result, label: "AI result received")
  end

  # Ignore events this controller does not handle
  defp process_webhook(_params), do: :ok

  defp verify_signature(conn, signature) do
    secret = System.fetch_env!("MODELRIVER_WEBHOOK_SECRET")

    # Plug.Parsers has already consumed the body by the time the controller
    # runs, so the raw bytes must be cached earlier in the pipeline with a
    # custom Plug.Parsers body reader.
    body = conn.assigns[:raw_body] || ""

    expected = :crypto.mac(:hmac, :sha256, secret, body) |> Base.encode16(case: :lower)
    Plug.Crypto.secure_compare(expected, signature)
  end
end
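Because `Plug.Parsers` consumes the request body before the controller runs, the raw bytes used for signature verification have to be cached up front. A minimal sketch using the `:body_reader` option of `Plug.Parsers` (the `MyAppWeb.CacheBodyReader` module name and the `:raw_body` assign key are illustrative, not part of ModelRiver):

ELIXIR
defmodule MyAppWeb.CacheBodyReader do
  # Reads the body normally, but stashes the raw bytes in conn.assigns.raw_body
  def read_body(conn, opts) do
    {:ok, body, conn} = Plug.Conn.read_body(conn, opts)
    {:ok, body, Plug.Conn.assign(conn, :raw_body, body)}
  end
end

# In lib/my_app_web/endpoint.ex:
plug Plug.Parsers,
  parsers: [:urlencoded, :multipart, :json],
  body_reader: {MyAppWeb.CacheBodyReader, :read_body, []},
  json_decoder: Jason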

GenServer for background AI tasks

ELIXIR
defmodule MyApp.AIWorker do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  def init(_opts), do: {:ok, %{}}

  # self() here is the caller's pid, so the result is sent back to the caller
  def generate(workflow, messages) do
    GenServer.cast(__MODULE__, {:generate, workflow, messages, self()})
  end

  def handle_cast({:generate, workflow, messages, caller}, state) do
    # In production, prefer Task.Supervisor.start_child/2 over a bare Task.start/1
    # so crashed tasks are visible in the supervision tree
    Task.start(fn ->
      result = MyApp.AI.chat(workflow, messages)
      send(caller, {:ai_result, result})
    end)

    {:noreply, state}
  end
end
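To start the worker, add it to the application's supervision tree. A sketch, assuming the default generated `MyApp.Application` module (the `MyApp.TaskSupervisor` name is illustrative, for use if you move the worker's tasks under a `Task.Supervisor`):

ELIXIR
# lib/my_app/application.ex
def start(_type, _args) do
  children = [
    MyAppWeb.Endpoint,
    {Task.Supervisor, name: MyApp.TaskSupervisor},
    MyApp.AIWorker
  ]

  Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
end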

Best practices

  1. Use Tasks for AI calls: Never block your LiveView process
  2. Add timeouts: AI calls can take seconds; set appropriate Task.await timeouts
  3. Leverage supervision trees: Let Elixir's OTP handle failures gracefully
  4. Use webhooks for heavy processing: Batch jobs should use the async API
  5. Monitor in Request Logs: Track costs and latencies in Observability
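Points 1 and 2 above can be combined into a small helper that never lets an AI call hang the caller; a sketch using the standard `Task.yield`/`Task.shutdown` idiom (the `MyApp.AITimeout` module is illustrative, not part of ModelRiver):

```elixir
defmodule MyApp.AITimeout do
  # Run fun in a task with a deadline.
  # Returns {:ok, result}, or {:error, :timeout} if the deadline passes.
  def call(fun, timeout \\ 30_000) do
    task = Task.async(fun)

    # Task.yield returns nil on timeout; Task.shutdown then kills the task
    # (and returns its reply if it finished in the meantime).
    case Task.yield(task, timeout) || Task.shutdown(task) do
      {:ok, result} -> {:ok, result}
      _ -> {:error, :timeout}
    end
  end
end
```

Usage: `MyApp.AITimeout.call(fn -> MyApp.AI.chat("my-chat-workflow", messages) end)`.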

Next steps