perf(cloud): shrink fal cold start by ~20s (#1024)
Conversation
Four changes that target the cold-start path measured in fal.log:

- livepeer_fal_app.py: drop `requirements = [...]` so fal isolate stops provisioning a separate venv on top of the image's. The image already has websockets/httpx/aiokafka via the kafka extra below.
- Dockerfile.cloud: pre-build the venv with `uv sync --extra livepeer --extra kafka --no-dev` so the runtime `uv run --extra livepeer --extra kafka livepeer-runner` is a no-op instead of fetching and installing aiokafka and uvloop and re-resolving ~50 packages.
- livepeer_fal_app.py dockerfile_str: re-sync after `COPY src/` so the daydream-scope editable install is refreshed at image build time, eliminating the "Built daydream-scope @ file:///app" rebuild on every cold start.
- livepeer_app.py lifespan: pre-warm the pipeline registry so the torch/diffusers/transformers/torchao import cascade runs at runner startup instead of on the first cloud-proxy call (~8s shifted off the user-perceived connect path).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Signed-off-by: emranemran <emran.mah@gmail.com>
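A minimal sketch of what the pre-built-venv layering described above might look like in Dockerfile.cloud. This is an illustrative fragment, not the repository's actual Dockerfile: the base image line, paths, and layer ordering are assumptions; only the two `uv sync` invocations and the post-`COPY src/` re-sync come from the commit message.

```dockerfile
# Hypothetical fragment of Dockerfile.cloud: bake the venv at image build
# time so the runtime `uv run --extra livepeer --extra kafka` finds nothing
# left to resolve or install.
WORKDIR /app
COPY pyproject.toml uv.lock ./

# Resolve and install all runtime extras once, cached as an image layer.
RUN uv sync --extra livepeer --extra kafka --no-dev

# Copy sources last, then re-sync so the editable install of daydream-scope
# points at the final /app contents instead of being rebuilt on every cold
# start ("Built daydream-scope @ file:///app").
COPY src/ src/
RUN uv sync --extra livepeer --extra kafka --no-dev
```

Ordering the lockfile copy before the source copy keeps the expensive dependency layer cached across source-only changes; only the cheap final re-sync reruns.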
🚀 fal.ai Preview Deployment: Testing on Cloud
Empty `requirements` broke the wrapper at the first websocket: fal isolate runs the App's setup() / websocket handler in a venv separate from the image's /app/.venv (under /usr/local/lib/python3.12/dist-packages), where httpx wasn't available, so check_runner_readiness raised ModuleNotFoundError on every connect.

The ~10s cold-start tax that motivated emptying `requirements` was actually `uv run --extra livepeer --extra kafka` re-syncing the image venv, not isolate. The Dockerfile.cloud `uv sync --extra livepeer --extra kafka --no-dev` and the fal-side dockerfile_str re-sync from the previous commit already address that on their own.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Signed-off-by: emranemran <emran.mah@gmail.com>
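A minimal sketch (not the project's actual code) of how the failure mode above surfaces: a readiness check that imports a dependency fails with ModuleNotFoundError when the handler runs under an interpreter whose venv lacks the image's packages. The function names here mirror the commit message; the bodies are illustrative.

```python
# Sketch: a dependency probe that reproduces the "httpx not importable"
# symptom when the wrapper runs outside the image venv.
import importlib.util


def venv_has(module: str) -> bool:
    """Report whether `module` resolves from the current interpreter's paths."""
    return importlib.util.find_spec(module) is not None


def check_runner_readiness() -> None:
    # In the broken configuration, httpx lives in /app/.venv but fal isolate's
    # interpreter searches /usr/local/lib/python3.12/dist-packages instead,
    # so the spec lookup comes back empty and every connect fails.
    if not venv_has("httpx"):
        raise ModuleNotFoundError(
            "httpx not importable: handler is running outside the image venv"
        )
```

`importlib.util.find_spec` checks resolvability without paying the import cost, which makes it a cheap probe for this class of venv-mismatch bug.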
Two low-risk image-size reductions stacked together:

1. Switch the base from `cuda:12.8.0-cudnn-runtime-ubuntu24.04` to plain `cuda:12.8.0-runtime-ubuntu24.04`. The pyproject.toml override `nvidia-cudnn-cu12>=9.15` already ships cuDNN via pip into the venv, so the base image's cuDNN was dead weight (~700 MB).
2. After `uv sync` and the bundled-plugin install, strip files that are never read at runtime: C/C++ headers/sources (only used to compile extensions), package tests/ and docs/ directories, .pdb debug symbols, and the leftover uv/apt caches. Keep .pyc / __pycache__ so first import isn't slowed by recompilation, and keep examples/ because some packages import from it.

Estimated combined savings: ~700 MB (cuDNN duplication) + 200-500 MB (strip) = ~1 GB compressed off the 7.43 GB image.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Signed-off-by: emranemran <emran.mah@gmail.com>
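One plausible shape for the strip pass in point 2, demonstrated against a throwaway directory rather than a real venv. The `find` patterns and the site-packages layout are assumptions; the keep/delete policy (drop headers, sources, .pdb, tests/ and docs/; keep __pycache__) comes from the commit message.

```shell
#!/bin/sh
# Sketch of a post-`uv sync` strip pass over site-packages. The real target
# would be something like /app/.venv/lib/python3.12/site-packages; here we
# build a fake package tree so the script is self-contained.
set -eu
site=$(mktemp -d)
mkdir -p "$site/pkg/tests" "$site/pkg/docs" "$site/pkg/__pycache__"
touch "$site/pkg/ext.h" "$site/pkg/ext.cpp" "$site/pkg/dbg.pdb"
touch "$site/pkg/mod.py" "$site/pkg/__pycache__/mod.cpython-312.pyc"

# Drop build-time-only files: C/C++ headers/sources (needed only to compile
# extensions) and .pdb debug symbols.
find "$site" -type f \( -name '*.h' -o -name '*.cpp' -o -name '*.pdb' \) -delete

# Drop per-package tests/ and docs/ trees. __pycache__ is deliberately kept
# so first import does not pay a bytecode recompilation cost.
find "$site" -type d \( -name tests -o -name docs \) -prune -exec rm -rf {} +
```

In an image build this would run in the same `RUN` layer as `uv sync`, so the deleted files never materialize in a committed layer.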
Two aggressive image-size reductions stacked:

1. Multi-stage build: build deps (curl/git/build-essential/software-properties-common/python3-dev) live in the builder stage and never reach the final image. The final stage installs only the runtime libs (libgl1/libglib2.0-0/libsm6/libxext6/libxrender-dev/libgomp1) and copies the venv + uv binary + uv-managed Python from the builder.
2. Strip duplicated CUDA libs from the venv. Torch on Linux brings in ~2 GB of nvidia-*-cu12 packages whose .so files are already in the base image at /usr/local/cuda/lib64. We `rm -rf` the lib dirs for cublas, cufft, curand, cusolver, cusparse, cuda_runtime, and cuda_nvrtc, keeping their dist-info so uv treats them as installed in the fal-side `uv sync`. LD_LIBRARY_PATH is set so torch finds them in /usr/local/cuda/lib64 at runtime.

Kept in the venv: cuDNN (override-pinned newer than the base ships), cusparselt (not in base), and nccl/cupti/nvtx/nvjitlink/nvshmem (small or version-sensitive).

Estimated combined savings: ~1.2-1.5 GB compressed off the 6.75 GB post-base-swap image.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Signed-off-by: emranemran <emran.mah@gmail.com>
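A hedged sketch of the multi-stage shape described above. Stage contents, paths, and the exact package lists are illustrative reconstructions from the commit message, not the repository's Dockerfile; the builder-stage install steps are elided.

```dockerfile
# Hypothetical multi-stage layout: build deps stay in the builder stage.
FROM nvidia/cuda:12.8.0-runtime-ubuntu24.04 AS builder
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl git build-essential software-properties-common python3-dev \
    && rm -rf /var/lib/apt/lists/*
# (builder: install uv, uv-managed Python, and `uv sync` into /app/.venv,
#  then strip the duplicated nvidia-*-cu12 lib dirs while keeping dist-info)

FROM nvidia/cuda:12.8.0-runtime-ubuntu24.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        libgl1 libglib2.0-0 libsm6 libxext6 libxrender-dev libgomp1 \
    && rm -rf /var/lib/apt/lists/*

# Only built artifacts cross the stage boundary; compilers and git do not.
COPY --from=builder /app /app
COPY --from=builder /root/.local /root/.local

# Let torch resolve the CUDA .so files from the base image instead of the
# stripped copies that the nvidia-*-cu12 wheels would have provided.
ENV LD_LIBRARY_PATH=/usr/local/cuda/lib64
```

Keeping each package's dist-info while deleting its lib dir is the load-bearing trick: uv's resolver sees the package as installed, so the fal-side `uv sync` does not reinstall the ~2 GB of wheels.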
fal's image extension runs `check_python.sh 3.12 python3.12`, which looks for python3.12 on PATH. The multi-stage rewrite dropped python3-dev (which used to pull python3.12 into /usr/bin), causing the deploy to fail with "the Docker image does not have a python3.12 executable". Re-add just python3.12 (the apt package, not python3-dev) so the check passes; the actual app keeps using the uv-managed Python from /root/.local/share/uv via uv run.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Signed-off-by: emranemran <emran.mah@gmail.com>
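An approximation of the check the commit works around. The real `check_python.sh` belongs to fal and its internals are not shown here; this sketch only assumes, per the commit message, that it probes PATH for a `python3.12` executable regardless of which interpreter the app actually uses.

```shell
#!/bin/sh
# Sketch: the deploy-time gate only cares that *some* python3.12 is on PATH.
has_on_path() {
    command -v "$1" >/dev/null 2>&1
}

if has_on_path python3.12; then
    echo "python3.12 check passes"
else
    # This is the failure the commit fixes by re-adding the apt package.
    echo "the Docker image does not have a python3.12 executable"
fi
```

This is why installing the bare python3.12 apt package is sufficient: the gate never executes the app with it, so the uv-managed interpreter still does the real work.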
After dropping curl/git from the runtime stage in the multi-stage rewrite, fal's image-extension `install_uv.sh` failed at deploy time with "curl is not installed". fal appears to (re)install uv even when the image already ships one, so curl + ca-certs are mandatory.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Signed-off-by: emranemran <emran.mah@gmail.com>
Summary
Four changes to the fal cold-start path, derived from a real cold-start trace in fal.log:

- livepeer_fal_app.py: drop `requirements = [...]` so fal isolate stops provisioning a separate venv on top of the image's. Combined with the Dockerfile change below, this removes the ~10-12s "Downloaded transformers / Uninstalled 50 packages / Installed 56 packages" block visible on every cold start.
- Dockerfile.cloud: pre-build the venv with `uv sync --extra livepeer --extra kafka --no-dev` so the runtime `uv run --extra livepeer --extra kafka livepeer-runner` is a no-op.
- livepeer_fal_app.py dockerfile_str: re-sync after `COPY src/` so the daydream-scope editable install is refreshed at image build time (kills the "Built daydream-scope @ file:///app" rebuild on every cold start).
- livepeer_app.py lifespan: pre-warm the pipeline registry so the torch/diffusers/transformers/torchao import cascade runs at runner startup instead of on the first cloud-proxy call (~8s shifted off the user-perceived connect path).

Combined expected savings: ~20s of user-visible cold start.
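The lifespan pre-warm in the last bullet can be sketched with `contextlib.asynccontextmanager`, the same shape FastAPI lifespans use. Everything here is a placeholder for the real code: `load_heavy_modules` stands in for the torch/diffusers/transformers/torchao import cascade, and `REGISTRY` for the actual pipeline registry.

```python
# Sketch: pay the heavy-import cost once at runner startup instead of on the
# first cloud-proxy call.
import asyncio
import contextlib

REGISTRY: dict[str, object] = {}


def load_heavy_modules() -> None:
    # Placeholder for the expensive imports; in the real app this is the ~8s
    # of module initialization the commit moves off the connect path.
    REGISTRY["pipelines"] = ["example-pipeline"]


@contextlib.asynccontextmanager
async def lifespan(app):
    # Startup: warm the registry in a worker thread so the event loop stays
    # responsive while blocking imports run.
    await asyncio.to_thread(load_heavy_modules)
    yield
    # Shutdown: nothing to release in this sketch.
    REGISTRY.clear()


async def main() -> None:
    async with lifespan(app=None):
        # By the time the first request could arrive, the registry is ready.
        assert "pipelines" in REGISTRY


asyncio.run(main())
```

`asyncio.to_thread` matters here: running a multi-second import cascade directly in the coroutine would block the loop during startup, whereas a thread lets health checks and other startup tasks proceed.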
Test plan
- `uv run ruff check src/scope/cloud/livepeer_app.py src/scope/cloud/livepeer_fal_app.py`: clean
- `uv run ruff format --check`: already formatted
- Both files compile (`py_compile`)
- The fal SDK loads livepeer_fal_app.py and reports `requirements = []`
- On scope-livepeer-emran, force a cold start, capture runner logs, and compare phase deltas vs the baseline fal.log. The "Downloaded / Uninstalled / Installed" block should be gone, and "Registry initialized" should fire before the first WS handshake.

🤖 Generated with Claude Code