
Draft: Finch everywhere

feld requested to merge finch_everywhere into develop

After spending some time analyzing all of our ReverseProxy code, here are the highlights of how the Reverse Proxy client works, ignoring the Gun/Hackney specific bits:

  • Client opens a connection to Pleroma to proxy a resource (ReverseProxy.call/3)
  • Pleroma opens a connection to the destination HTTP server in a chunking mode (ReverseProxy.request/4)
  • Pleroma immediately receives the HTTP headers and a Reference to the process (http client connection)
  • At this point we check the HTTP status code, then the content-length header to see if the file is too big (ReverseProxy.header_length_constraint/2)
  • We start a response to the Client (ReverseProxy.response/4) which begins reading chunks and streaming them to the Client from Phoenix (send_chunked/2 |> ReverseProxy.chunk_reply/3)
  • As we read chunks we manually track how long we've been downloading the content to make sure it doesn't exceed our limit (max_read_duration), and we also count the bytes (ReverseProxy.body_size_constraint/2) so the file can't blow past the size limit in case the content-length header was wrong

This is pretty heavy and also includes some manual management of the connection pool. The intention is to stream the response to the client as we download it to try to obscure any delay.
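
To make that flow concrete, here's a minimal sketch of what the chunk-relaying loop boils down to. This is not the actual code — read_chunk/1, check_size/2 and check_duration/2 are hypothetical helpers standing in for the Gun/Hackney-specific bits in Pleroma.ReverseProxy:

# Sketch only: relay chunks from the upstream connection to the client,
# enforcing the byte and duration limits as we go.
defp stream_to_client(conn, client_ref, status, opts) do
  conn = Plug.Conn.send_chunked(conn, status)
  started_at = System.monotonic_time(:millisecond)
  stream_loop(conn, client_ref, 0, started_at, opts)
end

defp stream_loop(conn, client_ref, bytes_read, started_at, opts) do
  case read_chunk(client_ref) do
    {:ok, chunk} ->
      bytes_read = bytes_read + byte_size(chunk)

      with :ok <- check_size(bytes_read, opts),       # body_size_constraint/2 analogue
           :ok <- check_duration(started_at, opts),   # max_read_duration analogue
           {:ok, conn} <- Plug.Conn.chunk(conn, chunk) do
        stream_loop(conn, client_ref, bytes_read, started_at, opts)
      end

    :done ->
      {:ok, conn}
  end
end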

Unfortunately we cannot do this with Finch through Tesla, as the Tesla adapter does not support streaming the response body, and it might not actually be possible in a way that meets our exact needs. Finch streams internally for performance and gives us no granular control over the connection pool, but it does expose streaming via Finch.stream/5. If we wanted to try to do this with Finch it would look roughly like this:

# stream/5's @spec
#
# stream(Finch.Request.t(), name(), acc, stream(acc), keyword()) :: 
# {:ok, acc} | {:error, Exception.t()} when acc: term()

acc = {nil, [], []}

# Fold each streamed message into the {status, headers, body} accumulator;
# data chunks are prepended, so the body ends up in reverse order.
fun = fn
  {:status, value}, {_, headers, body} -> {value, headers, body}
  {:headers, value}, {status, headers, body} -> {status, headers ++ value, body}
  {:data, value}, {status, headers, body} -> {status, headers, [value | body]}
end

req = Finch.build(:get, "https://blog.feld.me/", [], nil, [])

# stream/5 returns the accumulator, so body is the list of chunks in reverse order
{:ok, {status, headers, body}} = Finch.stream(req, MyFinch, acc, fun, [])

So if we want to stream the data to the client incrementally, we need to find a way to hook that into the fun/2 we pass to Finch.stream/5. It's not impossible, but it feels needlessly complex.
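
For illustration, a rough sketch of what that hook could look like, assuming we carry the Plug conn through the accumulator (MyFinch is a placeholder pool name and the error handling is hand-waved):

# Sketch only: chunk data to the client as Finch streams it,
# instead of accumulating the body in memory.
fun = fn
  {:status, status}, {conn, _status} ->
    {Plug.Conn.send_chunked(conn, status), status}

  {:headers, _headers}, acc ->
    acc

  {:data, data}, {conn, status} ->
    case Plug.Conn.chunk(conn, data) do
      {:ok, conn} -> {conn, status}
      # a real implementation would abort the stream here
      {:error, _reason} -> {conn, status}
    end
end

req = Finch.build(:get, "https://blog.feld.me/")
{:ok, {conn, _status}} = Finch.stream(req, MyFinch, {conn, nil}, fun, [])

And that still leaves out the status/content-length checks and the size/duration tracking from the list above.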

I'd rather we not do this right now and instead roll back this premature optimization, which is only useful if you do not have the MediaProxy enabled. The idea is sound, but I think it fails in reality for these reasons:

  1. most files on the fediverse are restricted in size, like 8MB or 16MB max for Pleroma/Mastodon
  2. most fediverse servers have plenty of bandwidth, so downloading media from a remote server should be pretty fast
  3. the "client" we are streaming to when MediaProxy is enabled is Nginx/Varnish or similar, and it's likely to be < 3ms away if it's on the local network, or < 1ms when it's a process on the same server
  4. so really we're talking about a brief pause (maybe 250ms? that seems reasonable as a worst case unless the remote server is really slow) for the first request to a newly encountered file only, after which Nginx/Varnish will have the file cached and will serve it to clients with effectively no latency. And if you have MRF.MediaProxyWarmingPolicy enabled it will already be hot in your Nginx/Varnish cache before any client ever requests it.

So with that said, the sanest solution for now seems to be to roll all of that back, move forward with Finch, push every request through Pleroma.HTTP.request/5 (which this MR does), and give up on the chunking.
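
With the chunking gone, the proxy path reduces to a plain request/response round trip. A hedged sketch, assuming Pleroma.HTTP.request/5 takes (method, url, body, headers, options) and returns a Tesla.Env like our other Tesla-backed calls:

# Sketch only: fetch the whole body, then send it to the client in one response.
# (Header keys are assumed to already be lowercase.)
with {:ok, %Tesla.Env{status: status, headers: headers, body: body}} <-
       Pleroma.HTTP.request(:get, url, "", [], []) do
  headers
  |> Enum.reduce(conn, fn {k, v}, conn -> Plug.Conn.put_resp_header(conn, k, v) end)
  |> Plug.Conn.send_resp(status, body)
end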

