Keran Lia, Shuhuai Yea, Jintao Daia, Anlin Maa, Wen Laib,+, Xiumian Hua,+
aState Key Laboratory of Critical Earth Material Cycling and Mineral Deposits, Frontiers Science Center for Critical Earth Material Cycling, School of Earth Sciences and Engineering, Nanjing University, Nanjing, 210023, China
bGannan Normal University
+Corresponding authors
FastMeasure is a professional tool for processing rock microscopic images, automatically detecting and segmenting grains. This project is inspired by and builds upon segmenteverygrain by Zoltán Sylvester. We appreciate the excellent work done by the segmenteverygrain team in developing a U-Net + SAM based grain segmentation solution for geomorphology and sedimentary geology research. FastMeasure introduces YOLO-based detection, multiple SAM variants, automatic scale detection, and enhanced geometric analysis. Built on deep learning, the system supports two model combinations, YOLO+FastSAM and YOLO+MobileSAM, combined with intelligent scale bar detection and a rich set of geometric parameters, enabling precise extraction of grain information from rock microscopic images and generation of complete statistical analysis reports.
While segmenteverygrain pioneered the use of SAM for grain segmentation, FastMeasure takes a different approach and introduces several enhancements:
| Feature | segmenteverygrain | FastMeasure |
|---|---|---|
| Detection Model | U-Net (patch-based CNN) | YOLO (real-time object detection) |
| SAM Variants | SAM 2.1 only | FastSAM + MobileSAM |
| Processing Speed | ~2.5 min for 3MP image | ~0.3s for FastSAM (GPU) |
| Scale Calibration | Manual (Shift+drag) | Automatic + Manual calibration |
| Geometric Parameters | Basic shape metrics | 10+ parameters including fractal dimension, angularity |
| Interactive Mode | Jupyter notebook based | Standalone GUI with unified key controls |
| Batch Processing | Notebook-based | Command-line batch processing |
| Model Fine-tuning | U-Net (TensorFlow) | YOLO (Ultralytics, easier) |
| Training Data | Manual annotation | Auto from interactive results |
| Code Structure | Notebook + modules | Modular core library with CLI |

| Feature | FastSAM | MobileSAM |
|---|---|---|
| Installation | Easy (pip install) | Requires GitHub access |
| Speed | Very fast (~0.3 s GPU) | Slower (~3.7 s GPU) |
| Precision | Good | Better |
| Interactive | Supported | Supported |
| Recommendation | Default choice | When precision matters |
Recommendation: Start with FastSAM (easier installation, faster). Install MobileSAM later if you need higher precision.
Minimum requirements (FastSAM only):
- Python 3.10+
- PyTorch >= 2.3.0 (supports NumPy 2.x)
- Ultralytics, OpenCV, NumPy, Pandas
- ~18 pip packages total
Optional (MobileSAM):
- MobileSAM from GitHub
- timm >= 0.9.0 (auto-installed with MobileSAM)
See Installation Guide for detailed instructions.
The system supports three usage modes:
- Auto Processing Mode: YOLO detection + SAM auto segmentation
- Batch Processing Mode: Batch processing of all images in a folder
- Interactive Mode: Manual point selection for fine segmentation via GUI
Similar to segmenteverygrain's U-Net fine-tuning, FastMeasure supports YOLO model fine-tuning to improve detection accuracy on your specific rock types:
# Quick fine-tune from interactive segmentation results
python utils/train_yolo.py --mode quick --input results/mobilesam/interactive/
# The fine-tuned model can then be used for better detection

See Model Training Guide below for detailed instructions.
| Model Combination | Features | Applicable Scenarios |
|---|---|---|
| YOLO + FastSAM | Fast, lightweight | Large batch quick processing |
| YOLO + MobileSAM | High precision, supports interaction | High precision requirements, interactive annotation |
- Automatically recognize red scale bar at bottom-right corner of images
- Calculate conversion factor from pixels to actual microns
- Support custom scale bar length configuration
- Automatic grain detection and segmentation
- Intelligent grain numbering and area labeling
- Support custom labeling styles (font, color, outline, etc.)
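The scale-bar detection described above can be sketched in a few lines. This is a simplified, NumPy-only illustration, not the actual `utils/scale_detector.py` implementation (which uses OpenCV color thresholds such as `red_lower1`/`red_upper1`); `detect_scale_bar` and its fixed RGB thresholds are assumptions made for the sketch:

```python
import numpy as np

def detect_scale_bar(image, known_length_um=1000.0):
    """Find a solid red bar in an RGB image and derive the µm/px factor.

    image: H x W x 3 uint8 RGB array. Returns (um_per_px, bar_length_px),
    or None when no red pixels are found.
    """
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    # "Red" = strong red channel, weak green/blue (a crude stand-in for the
    # HSV thresholds used by the real detector)
    red_mask = (r > 150) & (g < 80) & (b < 80)
    if not red_mask.any():
        return None
    cols = np.where(red_mask.any(axis=0))[0]
    bar_length_px = int(cols.max() - cols.min() + 1)  # horizontal extent of the bar
    return known_length_um / bar_length_px, bar_length_px
```

For a 1000 µm bar spanning 200 pixels this yields 5.0 µm/px.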
The system can calculate 14 grain geometric parameters:
| Category | Parameters | Description |
|---|---|---|
| Basic | area, perimeter, centroid_x/y, width, height | Fundamental measurements |
| Shape | circularity, aspect_ratio, rectangularity | Overall shape characteristics |
| Structural | compactness, roundness, convexity | Surface complexity measures |
| Advanced | fractal_dimension, angularity | Complexity and corner count |
| Zingg | EI_2d, FI_2d, AR_2d | 2D Zingg shape classification |
| Fourier | D2_2d, D3_2d, D4_2d | Fourier descriptors (contour frequency) |
All parameters are configured via configs/geometry.yaml.
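As an illustration of how such parameters are derived, the sketch below computes a few of the basic and shape metrics from a grain outline using standard definitions (circularity = 4πA/P²); `basic_shape_params` is a hypothetical helper, and the project's `geometry/grain_metric.py` may differ in details:

```python
import math

def shoelace_area(pts):
    """Polygon area via the shoelace formula; pts = [(x, y), ...]."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def polygon_perimeter(pts):
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def basic_shape_params(pts):
    """A few of the basic/shape metrics for a grain outline given as a polygon."""
    a, p = shoelace_area(pts), polygon_perimeter(pts)
    xs, ys = [x for x, _ in pts], [y for _, y in pts]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return {
        "area": a,
        "perimeter": p,
        "circularity": 4 * math.pi * a / p ** 2,  # 1.0 for a perfect circle
        "aspect_ratio": max(w, h) / min(w, h),    # >= 1.0; 1.0 for a square bbox
    }
```

For a unit square this gives area 1, perimeter 4, and circularity π/4 ≈ 0.785.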
- `configs/fastsam.yaml` / `configs/mobilesam.yaml`: Main configuration files
- `configs/fastsam_smooth.yaml`: Smooth-edge configuration (reduces jagged edges)
- `configs/fastsam_ultra_smooth.yaml`: Maximum smoothing for best edge quality
- `configs/geometry.yaml`: Geometric parameter configuration file
Create environment directly from the provided configuration:
# 1. Clone repository
git clone https://github.com/KeranLi/FastMeasure.git
cd FastMeasure
# 2. Create conda environment (CPU version)
conda env create -f envs/environment.yml
# 3. Activate environment
conda activate fastmeasure
# 4. Prepare model files
# Download from Google Drive (see Model Files section)
# Then verify:
python utils/download_models.py
# 5. Run FastMeasure
python run_fastsam.py --input your_image.jpg

For GPU support, edit envs/environment.yml and remove the `- cpuonly` line before creating the environment:
# Edit envs/environment.yml, remove '- cpuonly' line
conda env create -f envs/environment.yml
conda activate fastmeasure

Or create the GPU environment manually:
conda create -n fastmeasure python=3.10 pytorch torchvision cudatoolkit=11.8 -c pytorch -c conda-forge
conda activate fastmeasure
pip install -r envs/requirements.txt

If you prefer pip:
# 1. Create conda environment
conda create -n fastmeasure python=3.10 -y
conda activate fastmeasure
# 2. Install dependencies
pip install -r envs/requirements.txt
# 3. Run FastMeasure (after downloading models, see Model Files section)
python run_fastsam.py --input your_image.jpg

MobileSAM provides higher precision but requires additional installation from GitHub:
# Install from GitHub
pip install git+https://github.com/ChaoningZhang/MobileSAM.git

Or manual installation:
- Download https://github.com/ChaoningZhang/MobileSAM/archive/refs/heads/master.zip
- Extract and run:
pip install -e .
Note: MobileSAM is optional. FastSAM is recommended for most use cases (faster and easier to install).
Configuration files in the envs/ folder:
- `envs/environment.yml` - Conda environment configuration (recommended)
- `envs/requirements.txt` - Pip requirements list
- `envs/env-*.yaml` - Additional environment examples (for reference)
# Test installation
python -c "from core.segment_core import create_labeled_image; print('OK')"
# Run FastMeasure
python run_fastsam.py --help

Manual installation:

1. Create and activate a virtual environment (conda recommended; FastMeasure requires Python 3.10+):

conda create -n rockseg python=3.10
conda activate rockseg

2. Install dependencies:

# Install PyTorch 2.3+ (supports NumPy 2.x)
pip install "torch>=2.3.0" "torchvision>=0.18.0"
# Install other dependencies
pip install opencv-python pandas matplotlib numpy pyyaml "ultralytics>=8.2.0" shapely scikit-image pillow
# MobileSAM (optional but recommended)
# Note: mobile_sam requires timm, which is installed automatically
pip install git+https://github.com/ChaoningZhang/MobileSAM.git
FastMeasure requires pre-trained model files (~700 MB total, not included in repository).
1. Download model files from Google Drive:
   - Link: https://drive.google.com/drive/folders/1SPah9woaytIeinkLzQgGiXyj_SCJ3v1q?usp=drive_link
   - Files: `best_yolo_20260107.pt`, `FastSAM-s.pt`, `mobile_sam.pt`
2. Place the downloaded files in the `models/` folder.
If you have trained your own models, place them in models/ folder:
| Model | Filename | Size | Required |
|---|---|---|---|
| YOLO Detection | `best_yolo_20260107.pt` | ~100 MB | Yes |
| FastSAM | `FastSAM-s.pt` | ~150 MB | Yes |
| MobileSAM | `mobile_sam.pt` | ~450 MB | No (optional) |
python utils/download_models.py

This will check whether all required model files are present.
After downloading, your models/ folder should look like:
models/
├── best_yolo_20260107.pt   # YOLO detection model
├── FastSAM-s.pt            # FastSAM model
└── mobile_sam.pt           # MobileSAM model (optional)
For jagged edge issues, use optimized configurations:
# Standard smooth (balanced)
python fastsam_interactive.py --config configs/fastsam_smooth.yaml
# Ultra smooth (maximum smoothing, slower)
python fastsam_interactive.py --config configs/fastsam_ultra_smooth.yaml

The project provides a unified entry script run.py as a convenience wrapper for FastSAM and MobileSAM.
Note: run.py simply forwards to run_fastsam.py or run_mobilesam.py - all functionality is identical.
# FastSAM processing (equivalent to: python run_fastsam.py --input image.tif)
python run.py fastsam --input path/to/image.tif
# MobileSAM batch processing (equivalent to: python run_mobilesam.py --input folder/ --batch)
python run.py mobilesam --input path/to/folder --batch
# Interactive mode
python run.py mobilesam --interactive
# Terminal wizard mode (no arguments)
python run.py fastsam
python run.py mobilesam

When to use which:
- Use `run.py` if you prefer a single entry point for both modes
- Use `run_fastsam.py` / `run_mobilesam.py` directly for clarity or scripting
python run_fastsam.py --input path/to/image.tif
# Or use unified entry
python run.py fastsam --input path/to/image.tif

python run_fastsam.py --input path/to/folder --batch
# Or use unified entry
python run.py fastsam --input path/to/folder --batch

python run_fastsam.py --config configs/fastsam.yaml --input image.tif

python run_fastsam.py --input image.tif --conf 0.3 --min-area 50 --output my_results

python run_mobilesam.py
# Or use unified entry
python run.py mobilesam

Follow the prompts to select a processing mode and input parameters.
python run_mobilesam.py --input path/to/image.tif
# Or use unified entry
python run.py mobilesam --input path/to/image.tif

python run_mobilesam.py --input path/to/folder --batch
# Or use unified entry
python run.py mobilesam --input path/to/folder --batch

python run_mobilesam.py --interactive
# Or use unified entry
python run.py mobilesam --interactive

| Key/Operation | Function |
|---|---|
| Left click | Add foreground point (segmentation target) |
| Right click | Add background point (exclusion area) |
| `x` | Delete last grain |
| `d` | Delete all grains |
| `s` | Show save options menu |
| `S` (Shift+S) | Quick save complete results |
| `c` | Clear all point marks |
| `r` | Reset interface |
| `m` | Manual scale calibration (measure a known length) |
| `h` | Show help |
| `q` | Quit |
Press s (lowercase): Displays save options menu:
- Quick save complete results - Same as Shift+S
- Custom save path - Choose directory via file dialog
- Cancel - Return to segmentation
Press S (Shift+s): Immediately saves all results without prompting:
- Saves to `results/[mode]/interactive/[timestamp]/`
- Includes all output files (see Output File Guide)
- Best for quick saving during segmentation
When automatic scale bar detection fails, you can manually calibrate the scale:
- Press the `m` key to enter scale calibration mode
- Click the start point of a line of known length (e.g., a scale bar or ruler)
- Click the end point of the line
- Enter the actual length in microns in the input box at the bottom of the image
- Click OK or press Enter to confirm
- The system calculates and stores the scale factor (µm/px)
Note: The scale calibration input is now embedded in the matplotlib window (PyInstaller compatible) instead of a popup dialog.
This allows you to use any known-length feature in the image for calibration.
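The arithmetic behind this calibration is straightforward. A minimal sketch follows; the function names are illustrative, not the `core/scale_calibration.py` API:

```python
import math

def scale_factor_um_per_px(p_start, p_end, known_length_um):
    """µm per pixel from two clicked points spanning a feature of known length."""
    dist_px = math.dist(p_start, p_end)
    if dist_px == 0:
        raise ValueError("start and end points must differ")
    return known_length_um / dist_px

def area_px_to_um2(area_px, um_per_px):
    # Areas scale with the square of the linear conversion factor
    return area_px * um_per_px ** 2
```

Clicking (100, 200) and (400, 200) on a 600 µm scale bar gives 2.0 µm/px, so a grain of 50 px² measures 200 µm².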
The system is fully compatible with macOS. However, please note:
- First run may be slower due to model loading
- Interactive mode requires a display (not supported on remote SSH without X11)
- File dialogs run on main thread to ensure macOS compatibility
FastMeasure includes a YOLO Fine-tuning Module (utils/train_yolo.py) that allows you to improve detection accuracy on your specific rock types, similar to segmenteverygrain's U-Net fine-tuning capability.
- Better Accuracy: YOLO models trained on generic datasets may miss specific grain types in your samples
- Adapt to New Rock Types: Fine-tune on your own thin-section images for best results
- Iterative Improvement: Use interactive mode results as training data
The easiest way to fine-tune is using your interactive segmentation results:
# Step 1: Generate some training data using interactive mode
python run.py mobilesam --interactive
# Segment several images and save the results
# Step 2: Fine-tune YOLO using those results
python utils/train_yolo.py --mode quick --input results/mobilesam/interactive/ --epochs 50
# Step 3: Use the fine-tuned model
cp training_outputs/runs/train_*/weights/best.pt ./models/my_finetuned_yolo.pt
# Update configs/fastsam.yaml: yolo: "./models/my_finetuned_yolo.pt"

| Option | Description | Default |
|---|---|---|
| `--mode` | Training mode: `quick` (from interactive results) or `train` (from dataset) | `quick` |
| `--input` | Directory containing interactive mode results | Required for `quick` |
| `--data` | Path to YOLO-format dataset YAML | Required for `train` |
| `--base` | Base model: `yolov8n/s/m/l/x.pt` or path to a `.pt` file | `yolov8n.pt` |
| `--epochs` | Number of training epochs | 50 |
| `--imgsz` | Input image size | 1024 |
| `--batch` | Batch size (reduce if out of memory) | 8 |
| `--device` | Device: `auto`, `cpu`, `cuda`, `mps` | `auto` |
# Fine-tune from existing model with more epochs
python utils/train_yolo.py --mode quick \
--input results/mobilesam/interactive/ \
--base ./models/best_yolo_20260107.pt \
--epochs 100 \
--imgsz 1024
# Use larger model for better accuracy (slower)
python utils/train_yolo.py --mode quick \
--input results/interactive/ \
--base yolov8m.pt \
--epochs 50
# Train with custom YOLO-format dataset
python utils/train_yolo.py --mode train \
--data ./my_grain_dataset/dataset.yaml \
--epochs 200

If you have existing annotations, you can create a YOLO-format dataset:
dataset/
├── images/
│   ├── train/
│   ├── val/
│   └── test/
├── labels/
│   ├── train/
│   ├── val/
│   └── test/
└── dataset.yaml
dataset.yaml format:
path: /path/to/dataset
train: images/train
val: images/val
test: images/test
nc: 1
names: ['grain']

# Model path configuration
model_paths:
yolo: "./models/best_yolo_20260107.pt" # YOLO model path
fastsam: "./models/FastSAM-s.pt" # FastSAM model path
device: "cpu" # Running device: cpu or cuda
# Scale bar detection configuration
scale_detection:
enabled: true
known_length_um: 1000.0 # Scale bar actual length (microns)
# Processing parameter configuration
processing:
yolo_confidence: 0.25 # YOLO detection confidence threshold
min_area: 30 # Minimum grain area (pixels)
remove_edge_grains: false # Whether to remove edge grains
# Output configuration
output:
root_dir: "results" # Result output directory
save_visualization: true # Save visualization results
save_statistics: true # Save CSV statistics file
  save_summary: true            # Save JSON summary

Note:
- `configs/fastsam.yaml` is used for FastSAM mode (default: CPU)
- `configs/mobilesam.yaml` is used for MobileSAM mode (default: CPU)
- Change `device` to `cuda` if you have an NVIDIA GPU and CUDA installed
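If you would rather resolve the device at runtime than hard-code it in the YAML, a small helper along these lines could do it; `pick_device` is hypothetical and not part of FastMeasure:

```python
import importlib.util

def pick_device(requested="auto"):
    """Resolve a device string: honor an explicit setting, otherwise use CUDA
    when PyTorch reports it available, falling back to CPU."""
    if requested != "auto":
        return requested
    if importlib.util.find_spec("torch") is not None:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    return "cpu"
```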
grain_statistics_csv:
enabled: true
# Columns finally written to CSV (output in this order)
keep_columns:
- grain_id
- area
- centroid_x
- centroid_y
- width
- height
- perimeter
- circularity
- aspect_ratio
- compactness
- roundness
- area_um2
    - diameter_um

Note: The keep_columns list must use actual column names from the DataFrame:
- `grain_id` (not `label`) - Grain identifier
- `centroid_x`, `centroid_y` - Centroid coordinates
- `width`, `height` - Bounding box dimensions
- Geometric parameters such as `circularity`, `aspect_ratio`, etc. are calculated by the geometry module (requires `scikit-image`)
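How `keep_columns` might be applied to the statistics DataFrame can be sketched as follows; `select_columns` is an illustrative helper, not the project's actual export code:

```python
import pandas as pd

def select_columns(df, keep_columns):
    """Keep only the configured columns, in the configured order; skip (and
    report) any names that are not present in the DataFrame."""
    present = [c for c in keep_columns if c in df.columns]
    missing = [c for c in keep_columns if c not in df.columns]
    if missing:
        print(f"Warning: columns not in DataFrame, skipped: {missing}")
    return df[present]
```

A misconfigured name such as `label` would simply be skipped with a warning rather than raising a KeyError.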
After processing is complete, the system generates the following files in the output directory:
| File Name | Description |
|---|---|
| `segmentation_result.png` | Segmentation result visualization (with grain contours) |
| `segmentation_yolo_style.png` | YOLO-style side-by-side comparison (original + colored) |
| `segmentation_labeled.png` | Labeled result image with grain numbers |
| `segmentation_mask.png` | Binary segmentation mask image |
| `grain_statistics.csv` | Grain statistics data table |
| `summary.json` | Processing summary (JSON format) |
| `performance.json` | Performance statistics |
When using interactive mode and saving (press s or Shift+S), all files above are generated in:
results/[fastsam|mobilesam]/interactive/[image_name]_[timestamp]/
Note on segmentation_labeled.png:
- Generated automatically when saving in interactive mode
- Shows original image with grain number labels
- Uses intelligent label placement to avoid overlap
- Labels include grain ID numbers for easy reference
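The "intelligent label placement" idea can be illustrated with a greedy scheme: try the grain centroid first, then nudge the label until it no longer crowds previously placed ones. This is a toy sketch, not the logic in `utils/grain_marker.py`:

```python
import math

def place_labels(centroids, min_dist=20.0, step=10.0, max_tries=20):
    """Greedy placement: start at each centroid and nudge the label downward
    until it is at least min_dist pixels from every label placed so far."""
    placed = []
    for cx, cy in centroids:
        x, y = cx, cy
        for _ in range(max_tries):
            if all(math.dist((x, y), p) >= min_dist for p in placed):
                break
            y += step  # nudge below the crowded spot
        placed.append((x, y))
    return placed
```

Two grains sharing a centroid get their labels separated vertically, while well-spaced grains keep their labels at the centroids.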
All results are now organized under a unified results/ directory:
results/
├── fastsam/                      # FastSAM results
│   ├── auto/                     # Automatic processing results
│   │   └── [image_name]/
│   │       ├── segmentation_result.png
│   │       ├── segmentation_labeled.png
│   │       ├── segmentation_mask.png
│   │       ├── grain_statistics.csv
│   │       └── summary.json
│   └── interactive/              # Interactive processing results
│       └── [timestamp]/
│           └── ...
├── mobilesam/                    # MobileSAM results
│   ├── auto/                     # Automatic processing results
│   └── interactive/              # Interactive processing results
├── logs/                         # Unified log directory
│   ├── fastsam/                  # FastSAM logs
│   └── mobilesam/                # MobileSAM logs
└── temp/                         # Temporary files and cache
Benefits:
- All results in one place - easy to find and manage
- Clear separation between modes (FastSAM/MobileSAM) and types (auto/interactive)
- Unified logs for easier debugging
- No scattered result folders in project root
.
├── utils/
│   ├── download_models.py          # Model file download script
│   ├── train_yolo.py               # YOLO model training/fine-tuning script
│   ├── gui_launcher.py             # GUI launcher for desktop app
│   ├── file_dialog.py              # Cross-platform file dialog utilities
│   ├── grain_marker.py             # Grain labeling module
│   └── scale_detector.py           # Scale bar detection module
├── run.py                          # Unified entry script (new)
├── run_fastsam.py                  # FastSAM startup script
├── run_mobilesam.py                # MobileSAM startup script (supports interactive mode)
├── mobilesam_interactive.py        # MobileSAM standalone interactive tool
├── configs/fastsam.yaml            # FastSAM configuration file
├── configs/mobilesam.yaml          # MobileSAM configuration file
├── configs/geometry.yaml           # Geometric parameter configuration file
│
├── core/                           # Core module (new)
│   ├── __init__.py                 # Core module initialization
│   ├── segment_core.py             # Core segmentation functions (migrated from segmenteverygrain)
│   ├── seg_tools.py                # Shared tool functions
│   ├── seg_optimize.py             # Shared segmentation optimization
│   ├── cli_base.py                 # Shared CLI functions
│   ├── scale_calibration.py        # Manual scale calibration (new)
│   └── yolo_trainer.py             # YOLO fine-tuning module (new)
│
├── fastsam/                        # FastSAM module
│   ├── rock_fastsam_system.py      # FastSAM main system
│   ├── yolo_fastsam.py             # YOLO+FastSAM pipeline
│   ├── seg_engine.py               # Segmentation engine
│   ├── seg_optimize.py             # Segmentation optimization (compatibility wrapper)
│   └── seg_tools.py                # Tool functions (compatibility wrapper)
│
├── mobilesam/                      # MobileSAM module
│   ├── rock_mobilesam_system.py    # MobileSAM main system
│   ├── yolo_mobilesam.py           # YOLO+MobileSAM pipeline
│   ├── mobile_sam_engine.py        # MobileSAM engine
│   ├── seg_optimize.py             # Segmentation optimization (compatibility wrapper)
│   └── seg_tools.py                # Tool functions (compatibility wrapper)
│
├── geometry/                       # Geometric parameter calculation module
│   ├── grain_metric.py             # Grain shape parameter calculation
│   ├── config_loader.py            # Config loader
│   └── export_csv.py               # CSV export utility
│
├── configs/                        # Configuration files folder
│   ├── fastsam.yaml                # FastSAM configuration
│   └── geometry.yaml               # Geometric parameters configuration
├── models/                         # Model files directory
├── results/                        # Default output directory
└── Boulder_20260107/               # Test data example
Performance tests based on RTX 3060 graphics card:
| Model | GPU Inference | CPU Inference | Speed Comparison |
|---|---|---|---|
| FastSAM | ~77 ms | ~294 ms | CPU ~4× slower than GPU |
| MobileSAM | ~3.7 s | ~101 s | CPU ~26× slower than GPU |
Recommendation: For large batch processing, GPU acceleration is recommended; for small batches or testing, CPU mode can be used.
| Package | Purpose |
|---|---|
| `torch` | Deep learning framework |
| `ultralytics` | YOLOv8 and SAM models |
| `opencv-python` | Image processing and scale bar detection |
| `pandas` | Data processing and statistics |
| `matplotlib` | Result visualization |
| `numpy` | Numerical computation |
| `pyyaml` | Configuration file parsing |
| `shapely` | Geometric calculation |
| `scikit-image` | Image processing tools |
| `mobile_sam` | MobileSAM library (required for interactive mode) |
Q: What to do if scale bar detection fails?
A: Check if there is a clear red scale bar at the bottom-right corner of the image, or adjust color threshold parameters like red_lower1/red_upper1 in the configuration file.
Q: How to adjust detection sensitivity?
A: Modify the yolo_confidence parameter in the configuration file (smaller values mean more sensitive detection but may introduce noise).
Q: Interactive mode cannot start GUI?
A: Ensure the system has GUI support, or try setting the environment variable MPLBACKEND=TkAgg.
Q: Why isn't geometry.yaml affecting the output columns?
A: Check two things: 1) Ensure scikit-image is installed (pip install scikit-image), and 2) Verify column names in geometry.yaml match actual DataFrame columns (use grain_id, not label).
See CHANGELOG.md for detailed update content of each project version.
Contributions are welcome! If you have improvement suggestions or find issues, you can contribute code by submitting an issue or pull request.
This project builds upon the excellent work of segmenteverygrain by Zoltán Sylvester and colleagues. We thank them for pioneering the application of SAM to sedimentary grain segmentation and for making their work open source.
FastMeasure has migrated core segmentation functionality from segmenteverygrain to core/segment_core.py, making the project fully independent. The segmenteverygrain source code has been removed from this repository but remains available in Git history.
Migrated functions (now in core/segment_core.py):
- `create_labeled_image()` - Create labeled grain masks
- `plot_image_w_colorful_grains()` - Visualize grains with colors
- `plot_grain_axes_and_centroids()` - Plot grain orientation axes
- `find_connected_components()` - Detect overlapping grains
- `merge_overlapping_polygons()` - Merge overlapping segmentations
- `collect_polygon_from_mask()` - Extract polygons from masks
- `load_image()` - Image loading utilities
- `polygons_to_grains()` - Convert polygons to grain objects
- `save_grains()` - Save grain data
To view original segmenteverygrain code:
git show HEAD~1:segmenteverygrain/
# or restore temporarily:
git checkout HEAD~1 -- segmenteverygrain/

Key improvements in FastMeasure:
- YOLO-based Detection: Replaced patch-based U-Net with YOLO for real-time grain detection
- Multiple SAM Backends: Support for both FastSAM (speed) and MobileSAM (precision)
- Automatic Scale Detection: Intelligent red scale bar recognition at image corners
- Enhanced Geometric Analysis: 10+ grain shape parameters including fractal dimension
- Unified Architecture: Modular core library with command-line interface
- Cross-Platform: Full macOS and Linux/Windows support
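Of the parameters mentioned above, the fractal dimension is the least standard; it is commonly estimated by box counting. A minimal sketch under that assumption (in practice one would apply it to the grain's outline pixels rather than a filled mask, and the actual `geometry/grain_metric.py` implementation may differ):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate a fractal dimension by box counting on a square binary mask."""
    counts = []
    n = mask.shape[0]  # assumes a square mask
    for s in sizes:
        # Number of s-by-s boxes containing at least one foreground pixel
        c = sum(
            1
            for i in range(0, n, s)
            for j in range(0, n, s)
            if mask[i:i + s, j:j + s].any()
        )
        counts.append(c)
    # The slope of log N(s) versus log(1/s) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A filled square mask yields a dimension of 2, while a jagged grain boundary falls between 1 and 2.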
FastMeasure can be packaged as a standalone executable for Windows, allowing users to run it without installing Python.
# Install PyInstaller
pip install pyinstaller

# Run the build script
python application/build_exe.py

This will:
- Clean previous builds
- Package all Python dependencies
- Include model configs and core modules
- Create `dist/FastMeasure/` folder with the executable
Note: The GUI launcher (utils/gui_launcher.py) is used as the entry point. It directly calls the system classes (not via subprocess), ensuring consistent behavior between the executable and Python scripts.
# Build one-directory (recommended, faster startup)
pyinstaller --name FastMeasure \
--windowed \
--onedir \
--add-data "core;core" \
--add-data "fastsam;fastsam" \
--add-data "mobilesam;mobilesam" \
--add-data "geometry;geometry" \
--add-data "configs;configs" \
--add-data "utils;utils" \
--hidden-import "core" \
--hidden-import "core.seg_tools" \
--hidden-import "core.segment_core" \
--hidden-import "geometry.grain_metric" \
--hidden-import "fastsam.rock_fastsam_system" \
--hidden-import "mobilesam.rock_mobilesam_system" \
--hidden-import "ultralytics" \
--hidden-import "torch" \
  utils/gui_launcher.py

After building:
dist/
└── FastMeasure/
    ├── FastMeasure.exe    # Main executable
    ├── models/            # Place model files here
    ├── results/           # Output directory
    └── _internal/         # Python libraries
Before distributing:
- Download model files (see Model Files)
- Place in `dist/FastMeasure/models/`
- Zip the entire `FastMeasure/` folder
- Share the zip file with users
If you need a Windows installer (.exe setup file), you can create one using Inno Setup:
- Install Inno Setup
- Create a script file (e.g., `installer.iss`):
[Setup]
AppName=FastMeasure
AppVersion=1.0.0
DefaultDirName={autopf}\FastMeasure
OutputBaseFilename=FastMeasure_Setup
[Files]
Source: "dist\FastMeasure\*"; DestDir: "{app}"; Flags: ignoreversion recursesubdirs
[Icons]
Name: "{autoprograms}\FastMeasure"; Filename: "{app}\FastMeasure.exe"

- Build in Inno Setup Compiler to create `FastMeasure_Setup.exe`
- Executable size: ~500MB-1GB (includes Python + PyTorch)
- Startup time: First launch may take 10-30 seconds (model loading)
- CPU mode: The executable defaults to CPU mode for compatibility
- Model files: Not included in build (too large), must be downloaded separately