Overview
Phoenix is Elixir's premier web framework, built for real-time, fault-tolerant applications. Combined with ModelRiver, you can build AI-powered LiveView UIs, channel-based chat, and webhook-driven processing pipelines.
What you get:
- AI-powered LiveView components with real-time streaming
- Channel-based chat with ModelRiver's WebSocket support
- Webhook receivers for async AI processing
- Fault-tolerant AI pipelines leveraging Elixir's supervision trees
Quick start
Add dependencies
```elixir
# mix.exs
defp deps do
  [
    {:phoenix, "~> 1.7"},
    {:req, "~> 0.5"},
    {:jason, "~> 1.4"}
  ]
end
```
Create an AI client module
```elixir
defmodule MyApp.AI do
  @base_url "https://api.modelriver.com/v1"

  def chat(workflow, messages, opts \\ []) do
    body =
      %{workflow: workflow, messages: messages}
      |> Map.merge(Map.new(opts))

    Req.post!("#{@base_url}/ai",
      json: body,
      headers: [{"authorization", "Bearer #{api_key()}"}]
    )
    |> Map.get(:body)
  end

  # Read the key at runtime: a module attribute would freeze whatever value
  # was present at compile time (often nil in a release build).
  defp api_key, do: System.fetch_env!("MODELRIVER_API_KEY")
end
```
Use in a controller
```elixir
defmodule MyAppWeb.ChatController do
  use MyAppWeb, :controller

  def create(conn, %{"message" => message}) do
    result = MyApp.AI.chat("my-chat-workflow", [
      %{role: "user", content: message}
    ])

    json(conn, result)
  end
end
```
LiveView streaming
Build a real-time chat UI with LiveView:
```elixir
defmodule MyAppWeb.ChatLive do
  use MyAppWeb, :live_view

  def mount(_params, _session, socket) do
    {:ok, assign(socket, messages: [], loading: false)}
  end

  def handle_event("send", %{"message" => message}, socket) do
    messages = socket.assigns.messages ++ [%{role: "user", content: message}]

    # Run the AI call in a task so the LiveView process stays responsive;
    # the result arrives later as a message instead of blocking here.
    Task.async(fn -> MyApp.AI.chat("my-chat-workflow", messages) end)

    {:noreply, assign(socket, messages: messages, loading: true)}
  end

  # Task.async delivers {ref, result} to this process when the task finishes.
  def handle_info({ref, result}, socket) when is_reference(ref) do
    Process.demonitor(ref, [:flush])

    content = get_in(result, ["data", "choices", Access.at(0), "message", "content"])
    messages = socket.assigns.messages ++ [%{role: "assistant", content: content}]

    {:noreply, assign(socket, messages: messages, loading: false)}
  end
end
```
```heex
<div id="chat" class="flex flex-col gap-4">
  <%= for msg <- @messages do %>
    <div class={"p-3 rounded #{if msg.role == "user", do: "bg-blue-100", else: "bg-gray-100"}"}>
      <strong><%= msg.role %>:</strong> <%= msg.content %>
    </div>
  <% end %>

  <.form for={%{}} phx-submit="send">
    <input type="text" name="message" placeholder="Type a message..." class="w-full p-2 border rounded" />
    <button type="submit" disabled={@loading}>Send</button>
  </.form>
</div>
```
Webhook receiver
Process async ModelRiver results via webhooks:
```elixir
defmodule MyAppWeb.WebhookController do
  use MyAppWeb, :controller

  def handle(conn, params) do
    # Verify the webhook signature before trusting the payload
    signature = get_req_header(conn, "x-modelriver-signature") |> List.first()

    if signature && verify_signature(conn, signature) do
      process_webhook(params)
      json(conn, %{status: "ok"})
    else
      conn |> put_status(401) |> json(%{error: "Invalid signature"})
    end
  end

  defp process_webhook(%{"event" => "request.completed"} = params) do
    # Handle the completed AI request
    result = params["data"]
    IO.inspect(result, label: "AI result received")
  end

  # Ignore events this receiver doesn't handle
  defp process_webhook(_params), do: :ok

  defp verify_signature(conn, signature) do
    secret = System.fetch_env!("MODELRIVER_WEBHOOK_SECRET")

    # Note: Plug.Parsers consumes the raw request body before controllers run.
    # To verify signatures reliably, configure a custom :body_reader that
    # caches the raw payload and read the cached copy here.
    {:ok, body, _conn} = Plug.Conn.read_body(conn)

    expected = :crypto.mac(:hmac, :sha256, secret, body) |> Base.encode16(case: :lower)
    Plug.Crypto.secure_compare(expected, signature)
  end
end
```
GenServer for background AI tasks
```elixir
defmodule MyApp.AIWorker do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  def init(_opts), do: {:ok, %{}}

  # Runs in the client process, so self() is the caller, which will
  # receive {:ai_result, result} when the request completes.
  def generate(workflow, messages) do
    GenServer.cast(__MODULE__, {:generate, workflow, messages, self()})
  end

  def handle_cast({:generate, workflow, messages, caller}, state) do
    Task.start(fn ->
      result = MyApp.AI.chat(workflow, messages)
      send(caller, {:ai_result, result})
    end)

    {:noreply, state}
  end
end
```
Best practices
- Use Tasks for AI calls: Never block your LiveView process
- Add timeouts: AI calls can take seconds; set appropriate Task.await timeouts
- Leverage supervision trees: Let Elixir's OTP handle failures gracefully
- Use webhooks for heavy processing: Batch jobs should use the async API
- Monitor in Request Logs: Track costs and latencies in Observability
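The timeout advice above can be sketched as a small wrapper. This is an illustrative sketch, not part of ModelRiver's API: the `AITimeout` module name and the 30-second default are assumptions, and the `fun` argument stands in for any call such as `fn -> MyApp.AI.chat(workflow, messages) end`.

```elixir
defmodule AITimeout do
  @doc """
  Runs `fun` in a task with a bounded wait. Returns {:ok, result} if the
  task finishes in time, otherwise kills it and returns {:error, :timeout}.
  """
  def run(fun, timeout_ms \\ 30_000) do
    task = Task.async(fun)

    # Task.yield waits up to timeout_ms; on nil, Task.shutdown kills the
    # straggler (and catches a result that arrived in the race window).
    case Task.yield(task, timeout_ms) || Task.shutdown(task, :brutal_kill) do
      {:ok, result} -> {:ok, result}
      _ -> {:error, :timeout}
    end
  end
end
```

Inside a LiveView, prefer the fire-and-forget `Task.async` + `handle_info` pattern shown earlier (or `Task.Supervisor.async_nolink` for crash isolation); this blocking wrapper fits controllers and background workers.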
Next steps
- Next.js integration: React-based alternative
- FastAPI integration: Python backend
- Webhooks: Async processing guide