20 changes: 17 additions & 3 deletions README.md
@@ -52,18 +52,32 @@ ray job submit --address=http://localhost:8265 --working-dir . -- python hello_r
source ~/pw/activate-ray/ray_venv/bin/activate

# Submit job through the session tunnel (uploads code via --working-dir)
-ray job submit --address=http://localhost:8888 --working-dir . -- python your_script.py
+ray job submit --address=http://localhost:8888 --working-dir . -- python hello_ray.py
```

**From your local machine** (via SSH tunnel):
```bash
# Set up tunnel first
-ssh -L 8888:localhost:8888 -o ProxyCommand="pw ssh --proxy-command %h" $USER@workspace
+ssh -i ~/.ssh/pwcli -L 8888:localhost:8888 -o ProxyCommand="pw ssh --proxy-command %h" $USER@workspace

# Then submit jobs (uploads code via --working-dir)
-ray job submit --address=http://localhost:8888 --working-dir . -- python your_script.py
+ray job submit --address=http://localhost:8888 --working-dir . -- python hello_ray.py
```
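
As a quick sanity check before submitting, it can help to confirm the forwarded port is actually listening. A minimal sketch using only bash built-ins (the port 8888 and the `check_tunnel` helper name are assumptions matching the tunnel command above):

```shell
# Probe the forwarded dashboard port before running `ray job submit`.
# Port 8888 matches the tunnel above; adjust if your session uses another.
check_tunnel() {
  local port="${1:-8888}"
  # bash's /dev/tcp pseudo-device opens a TCP connection on success
  if (exec 3<>"/dev/tcp/localhost/${port}") 2>/dev/null; then
    echo "tunnel up on port ${port}"
  else
    echo "tunnel not reachable on port ${port}; start the ssh tunnel first"
  fi
}

check_tunnel 8888
```

If the tunnel is down, start the `ssh -L` command from the previous step in a separate terminal and re-run the check.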

### Installing Additional Packages

The default Ray installation includes only `ray[default]`. For GPU workloads with PyTorch or TensorFlow:

```bash
# Install PyTorch with CUDA support (uv is pre-installed by the workflow)
uv pip install --python ~/pw/activate-ray/ray_venv/bin/python torch

# Or install TensorFlow
uv pip install --python ~/pw/activate-ray/ray_venv/bin/python tensorflow
```

> **Copilot AI review comment (Jan 30, 2026) on lines +75 to +76:**
> The comment says "Install PyTorch with CUDA support", but `pip install torch` installs the CPU-only build by default. Installing a CUDA-enabled build requires the appropriate PyTorch index URL (see PyTorch's installation instructions). Consider either:
>
> 1. updating the command to actually install with CUDA support (e.g., PyTorch's recommended install command with the correct index URL);
> 2. clarifying in the comment that this installs the default build and that users should consult the PyTorch documentation for CUDA-specific installs; or
> 3. dropping "with CUDA support" from the comment if the default build is intended.
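
For reference, a CUDA-enabled install generally points the installer at PyTorch's CUDA wheel index rather than PyPI's default CPU wheel. A hedged sketch that composes (but does not run) such a command — the `cu121` tag and the `torch_cuda_install_cmd` helper are assumptions; match the tag to the cluster's CUDA version per pytorch.org's install matrix:

```shell
# Sketch: build the CUDA-enabled torch install command as a string so the
# wheel-index tag can be reviewed before anything is installed.
# cu121 is an assumed example tag; cu118 etc. are also published.
torch_cuda_install_cmd() {
  local cuda_tag="$1"   # e.g. cu118, cu121
  echo "uv pip install --python ~/pw/activate-ray/ray_venv/bin/python --index-url https://download.pytorch.org/whl/${cuda_tag} torch"
}

# Print the command; run it manually once the tag is confirmed.
torch_cuda_install_cmd cu121
```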

> **Note:** The `examples/gpu_workload.py` script works without PyTorch but will show "PyTorch not available" messages. Install PyTorch to enable GPU matrix computations in the example.

### Connect Programmatically

2 changes: 1 addition & 1 deletion workflow.yaml
@@ -188,7 +188,7 @@ jobs:
echo "Ray Dashboard Session is ready!"
echo "=========================================="
echo " Proxy URL: https://${PW_PLATFORM_HOST}${basepath}/"
-echo "  Local tunnel: ssh -L ${port}:localhost:${port} -o ProxyCommand=\"pw ssh --proxy-command %h\" ${PW_USER}@workspace"
+echo "  Local tunnel: ssh -i ~/.ssh/pwcli -L ${port}:localhost:${port} -o ProxyCommand=\"pw ssh --proxy-command %h\" ${PW_USER}@workspace"
echo " Then access: http://localhost:${port}/"
echo "=========================================="
