AI-powered railway level crossing safety system using infrared-optical image fusion, real-time object detection, depth estimation, and ROI segmentation — built for Pakistan Railway.
Pakistan Railway is the country's most affordable means of public transport, carrying over 52.2 million passengers annually and averaging 178,000 passengers per day across 28 mail, express, and passenger train lines.
Despite its critical role, safety remains a serious concern. Over a five-year period, 537 train accidents occurred — 313 of which resulted in loss of life or serious injury. Accident analysis shows that 32% of these accidents happened at unmanned level crossings, where the responsibility for avoiding collisions falls on road users.
Traditional optical camera-based detection systems fail under adverse conditions — low light, fog, rain, and nighttime — leaving a dangerous blind spot in the safety net.
Saferail-AI was built to close that gap.
Saferail-AI combines infrared (thermal) and optical (RGB) camera feeds to overcome the limitations of single-modality vision:
- Infrared Imaging — Captures thermal radiation emitted by objects, making them visible regardless of lighting conditions.
- Image Fusion (TarDAL) — Fuses infrared and optical frames to produce rich, all-weather imagery that preserves both thermal and textural detail.
- Object Detection (YOLOv5) — Runs real-time detection on the fused frames to identify people, vehicles, and obstacles on or near the track.
- Depth / Distance Estimation — Estimates the distance of detected objects from the train's front camera.
- ROI Segmentation — Isolates the track region of interest to filter irrelevant detections and focus on objects that pose an actual collision risk.
- MMI Driver Alert — Displays classified object type and its distance on a Man–Machine Interface (MMI) screen in the driver's cabin, triggering timely warnings and control actions.
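At a high level, the stages above form a linear pipeline: fuse, detect, estimate distance, filter by ROI, alert. The sketch below shows how they might be wired together. All function bodies here are illustrative placeholders (not the repository's actual TarDAL/YOLOv5 API); only the data flow reflects the design described above.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "person", "vehicle"
    distance_m: float  # estimated distance from the train's front camera
    in_roi: bool       # whether the object lies within the track ROI

def fuse(ir_frame, rgb_frame):
    # Placeholder for TarDAL fusion: combine thermal and optical pixels.
    return [(ir + rgb) / 2 for ir, rgb in zip(ir_frame, rgb_frame)]

def detect(fused_frame):
    # Placeholder for YOLOv5 inference on the fused frame:
    # returns (label, estimated distance in metres) pairs.
    return [("person", 120.0), ("vehicle", 340.0)]

def in_track_roi(label):
    # Placeholder for ROI segmentation: keep only objects on the track.
    return label == "person"

def process_frame(ir_frame, rgb_frame):
    fused = fuse(ir_frame, rgb_frame)
    detections = [
        Detection(label, dist, in_track_roi(label))
        for label, dist in detect(fused)
    ]
    # Only in-ROI objects are forwarded to the driver's MMI screen.
    return [d for d in detections if d.in_roi]

for d in process_frame([0.2, 0.4], [0.6, 0.8]):
    print(f"ALERT: {d.label} at {d.distance_m:.0f} m")  # → ALERT: person at 120 m
```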
- All-weather, day/night operation via infrared-optical sensor fusion
- Real-time object detection optimised for edge hardware
- Distance estimation to quantify collision risk
- Track ROI segmentation to eliminate false positives
- TensorRT / INT8 quantisation support for Jetson deployment
- RTSP stream support for live camera feeds
- Socket-based streaming for remote MMI display
- ONNX export for cross-platform model portability
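The repository does not spell out how distance estimation is computed, but a common monocular approach is the pinhole-camera relation d = f·H / h, where f is the focal length in pixels, H a known real-world object height, and h the detected bounding-box height. Treat the following as an illustrative sketch under that assumption, not the project's actual method:

```python
def estimate_distance_m(focal_px: float, real_height_m: float, bbox_height_px: float) -> float:
    """Pinhole-camera distance estimate: d = f * H / h."""
    if bbox_height_px <= 0:
        raise ValueError("bounding box height must be positive")
    return focal_px * real_height_m / bbox_height_px

# A 1.7 m person whose bounding box is 85 px tall, with an 800 px focal length:
print(estimate_distance_m(800.0, 1.7, 85.0))  # → 16.0
```

The accuracy of this estimate depends on knowing the object class's typical height, which is why classification and distance estimation are coupled in the alert shown on the MMI.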
| Component | Specification |
|---|---|
| Edge SoC | NVIDIA Jetson Orin Nano 8GB |
| RGB Camera | Any compatible USB / CSI camera |
| Infrared Camera | Thermal/IR camera with video output |
| Storage | ≥ 32 GB (microSD or NVMe SSD recommended) |
| OS | Ubuntu 20.04 / JetPack 5.x |
Note: The system can also be run on a standard desktop/laptop GPU for development and testing. Jetson-specific instructions are provided in the Deployment section.
- Python 3.8+
- Git (with submodule support)
- CUDA toolkit compatible with your environment
```bash
git clone --recurse-submodules https://github.com/mfaizan-ai/Saferail-AI
cd Saferail-AI
```

If you already cloned without `--recurse-submodules`, initialise submodules manually:

```bash
git submodule update --init --recursive
```
```bash
python3 -m venv saferail
source saferail/bin/activate        # Linux / macOS
# saferail\Scripts\activate.bat     # Windows
pip install --upgrade pip
pip install -r requirements.txt
```

PyTorch must be installed separately to match your hardware.

Standard GPU (desktop/laptop):

```bash
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
```

NVIDIA Jetson (JetPack): follow the official NVIDIA guide for your JetPack version:
👉 https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html
```bash
# Run the main fusion + detection pipeline
python app_final.py

# Run on a live RTSP camera stream
python app_final_rtsp.py --source rtsp://<camera-ip>:<port>/stream

# Stream results to a remote MMI display over a socket
python app_socket_final.py --host <mmi-host-ip> --port 5005

# Run track ROI segmentation
python track_roi.py --source <video_path_or_camera_index>

# Export detection weights to ONNX
python generate_onnx.py --weights detection_weights/weights/<model>.pt --output model.onnx

# Run TensorRT inference with a built engine
python run_trt_inference.py --engine model.engine --source <video_path>
```

Place pre-trained weights in the appropriate directories before running:
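`app_socket_final.py` streams detection results to the cabin MMI over a network socket. The wire format isn't documented here, so the sketch below is a hypothetical JSON-lines exchange over TCP using only the standard library: a toy stand-in for the MMI display accepts one connection and reads one alert record.

```python
import json
import socket
import threading

def mmi_server(srv: socket.socket, out: list):
    """Toy stand-in for the MMI display: accept one connection, read one JSON line."""
    conn, _ = srv.accept()
    with conn:
        out.append(json.loads(conn.makefile().readline()))

def send_alert(host: str, port: int, alert: dict):
    """Send one detection record as a JSON line, as the edge device might."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall((json.dumps(alert) + "\n").encode())

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port (the real app takes --port 5005)
srv.listen(1)
received = []
t = threading.Thread(target=mmi_server, args=(srv, received))
t.start()
send_alert("127.0.0.1", srv.getsockname()[1], {"label": "vehicle", "distance_m": 220.5})
t.join()
srv.close()
print(received[0])  # → {'label': 'vehicle', 'distance_m': 220.5}
```

A newline-delimited JSON protocol keeps the MMI side trivial to parse; the actual application may use a different encoding.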
| Model | Directory |
|---|---|
| YOLOv5 detection weights | detection_weights/weights/ |
| Segmentation weights | segmentation_weights/pt_weights/ |
| TarDAL fusion weights | TarDAL/ (managed as submodule) |
Refer to each module's documentation or contact the maintainers for access to trained weights.
Download and flash JetPack 5.x using the NVIDIA SDK Manager, then install PyTorch for Jetson:

```bash
# Follow NVIDIA's official guide:
# https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html
```

Export the detection model to ONNX and build a TensorRT engine:

```bash
python generate_onnx.py --weights detection_weights/weights/best.pt
trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

For INT8 quantisation (maximum performance):

```bash
trtexec --onnx=model.onnx --saveEngine=model.engine --int8 \
    --calib=int8_calibration/
```

Run inference with the built engine:

```bash
python run_trt_inference.py --engine model.engine --source /dev/video0
```