
FastMeasure: A cross-platform workflow and software for fast measurement of geometric parameters via deep learning

Keran Li^a, Shuhuai Ye^a, Jintao Dai^a, Anlin Ma^a, Wen Lai^b,+, Xiumian Hu^a,+

^a State Key Laboratory of Critical Earth Material Cycling and Mineral Deposits, Frontiers Science Center for Critical Earth Material Cycling, School of Earth Sciences and Engineering, Nanjing University, Nanjing, 210023, China

^b Gannan Normal University

^+ Corresponding authors


Project Overview

FastMeasure is a professional tool for processing rock microscopic images that automatically detects and segments grains. The project is inspired by and builds upon segmenteverygrain by Zoltán Sylvester; we appreciate the segmenteverygrain team's excellent work on a U-Net + SAM grain-segmentation solution for geomorphology and sedimentary geology research. FastMeasure introduces YOLO-based detection, multiple SAM variants, automatic scale detection, and enhanced geometric analysis. Built on deep learning, the system supports two model combinations, YOLO+FastSAM and YOLO+MobileSAM, combined with intelligent scale bar detection and rich geometric parameter calculation, enabling precise extraction of grain information from rock microscopic images and generation of complete statistical analysis reports.

While segmenteverygrain pioneered the use of SAM for grain segmentation, FastMeasure takes a different approach and introduces several enhancements:

| Feature | segmenteverygrain | FastMeasure |
|---|---|---|
| Detection model | U-Net (patch-based CNN) | YOLO (real-time object detection) |
| SAM variants | SAM 2.1 only | FastSAM + MobileSAM |
| Processing speed | ~2.5 min for a 3 MP image | ~0.3 s with FastSAM (GPU) |
| Scale calibration | Manual (Shift+drag) | Automatic + manual calibration |
| Geometric parameters | Basic shape metrics | 10+ parameters, including fractal dimension and angularity |
| Interactive mode | Jupyter-notebook based | Standalone GUI with unified key controls |
| Batch processing | Notebook-based | Command-line batch processing |
| Model fine-tuning | U-Net (TensorFlow) | YOLO (Ultralytics, easier) |
| Training data | Manual annotation | Auto-generated from interactive results |
| Code structure | Notebook + modules | Modular core library with CLI |

FastSAM vs MobileSAM

| Feature | FastSAM | MobileSAM |
|---|---|---|
| Installation | Easy (pip install) | Requires GitHub access |
| Speed | ⚡ Very fast (~0.3 s GPU) | 🐢 Slower (~3.7 s GPU) |
| Precision | Good | Better |
| Interactive | ✓ Supported | ✓ Supported |
| Recommendation | Default choice | When precision matters |

Recommendation: Start with FastSAM (easier installation, faster). Install MobileSAM later if you need higher precision.

Installation Requirements

Minimum requirements (FastSAM only):

  • Python 3.10+
  • PyTorch >= 2.3.0 (supports NumPy 2.x)
  • Ultralytics, OpenCV, NumPy, Pandas
  • ~18 pip packages total

Optional (MobileSAM):

  • MobileSAM from GitHub
  • timm >= 0.9.0 (auto-installed with MobileSAM)

See Installation Guide for detailed instructions.

The system supports three usage modes:

  • Auto Processing Mode: YOLO detection + SAM auto segmentation
  • Batch Processing Mode: Batch processing of all images in a folder
  • Interactive Mode: Manual point selection for fine segmentation via GUI

Model Fine-tuning

Similar to segmenteverygrain's U-Net fine-tuning, FastMeasure supports YOLO model fine-tuning to improve detection accuracy on your specific rock types:

# Quick fine-tune from interactive segmentation results
python utils/train_yolo.py --mode quick --input results/mobilesam/interactive/

# The fine-tuned model can then be used for better detection

See Model Training Guide below for detailed instructions.

Core Features

1. Dual Model Support

| Model combination | Features | Applicable scenarios |
|---|---|---|
| YOLO + FastSAM | Fast, lightweight | Large-batch quick processing |
| YOLO + MobileSAM | High precision, supports interaction | High-precision requirements, interactive annotation |

2. Scale Bar Detection

  • Automatically recognize red scale bar at bottom-right corner of images
  • Calculate conversion factor from pixels to actual microns
  • Support custom scale bar length configuration
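As a rough illustration of the idea (not the project's actual scale_detector implementation), the pixel-to-micron conversion factor can be estimated by thresholding red pixels in the bottom-right quadrant and measuring the bar's pixel length. The function name and RGB thresholds below are illustrative assumptions:

```python
import numpy as np

def detect_red_scale_bar(img, known_length_um=1000.0):
    """Estimate microns-per-pixel from a red scale bar in the
    bottom-right quadrant of an RGB image (H, W, 3), dtype uint8.
    Simplified sketch; thresholds are illustrative, not FastMeasure's."""
    h, w, _ = img.shape
    roi = img[h // 2:, w // 2:]                  # bottom-right quadrant
    r, g, b = roi[..., 0], roi[..., 1], roi[..., 2]
    red_mask = (r > 150) & (g < 80) & (b < 80)   # crude "red" threshold
    cols = np.where(red_mask.any(axis=0))[0]     # columns containing red pixels
    if cols.size == 0:
        return None                              # detection failed
    bar_px = cols.max() - cols.min() + 1         # bar length in pixels
    return known_length_um / bar_px              # um per pixel
```

In the real system a failed detection falls back to manual calibration (see the Interactive Mode section).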

3. Grain Segmentation and Labeling

  • Automatic grain detection and segmentation
  • Intelligent grain numbering and area labeling
  • Support custom labeling styles (font, color, outline, etc.)

4. Geometric Parameter Calculation

The system can calculate the following grain geometric parameters:

| Category | Parameters | Description |
|---|---|---|
| Basic | area, perimeter, centroid_x/y, width, height | Fundamental measurements |
| Shape | circularity, aspect_ratio, rectangularity | Overall shape characteristics |
| Structural | compactness, roundness, convexity | Surface-complexity measures |
| Advanced | fractal_dimension, angularity | Complexity and corner count |
| Zingg | EI_2d, FI_2d, AR_2d | 2D Zingg shape classification |
| Fourier | D2_2d, D3_2d, D4_2d | Fourier descriptors (contour frequency) |

All parameters are configured via configs/geometry.yaml.
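Several of the shape parameters above follow standard definitions. As a sketch of those definitions only (the project's grain_metric module works on segmentation masks and may differ in detail), here is how area, perimeter, circularity, aspect_ratio, and rectangularity could be computed for a grain outline given as a polygon:

```python
import math

def polygon_metrics(points):
    """Standard shape metrics for a closed polygon [(x, y), ...].
    Illustrative only; FastMeasure's geometry module may differ."""
    n = len(points)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        area += x0 * y1 - x1 * y0                       # shoelace formula
        perimeter += math.hypot(x1 - x0, y1 - y0)
    area = abs(area) / 2.0
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width, height = max(xs) - min(xs), max(ys) - min(ys)
    return {
        "area": area,
        "perimeter": perimeter,
        "circularity": 4 * math.pi * area / perimeter ** 2,   # 1.0 for a circle
        "aspect_ratio": max(width, height) / max(min(width, height), 1e-9),
        "rectangularity": area / max(width * height, 1e-9),   # area / bounding box
    }
```

For a unit square this yields circularity π/4 ≈ 0.785 and rectangularity 1.0, matching the usual definitions.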

5. Flexible Configuration System

  • configs/fastsam.yaml / configs/mobilesam.yaml: Main configuration files
  • configs/fastsam_smooth.yaml: Smooth edge configuration (reduces jagged edges)
  • configs/fastsam_ultra_smooth.yaml: Maximum smoothing for best edge quality
  • configs/geometry.yaml: Geometric parameter configuration file

Installation

Method 1: Conda Environment (Recommended)

Create environment directly from the provided configuration:

# 1. Clone repository
git clone https://github.com/KeranLi/FastMeasure.git
cd FastMeasure

# 2. Create conda environment (CPU version)
conda env create -f envs/environment.yml

# 3. Activate environment
conda activate fastmeasure

# 4. Prepare model files
# Download from Google Drive (see Model Files section)
# Then verify:
python utils/download_models.py

# 5. Run FastMeasure
python run_fastsam.py --input your_image.jpg

GPU Version

For GPU support, edit envs/environment.yml and remove the - cpuonly line before creating the environment:

# Edit envs/environment.yml, remove '- cpuonly' line
conda env create -f envs/environment.yml
conda activate fastmeasure

Or create GPU environment manually:

conda create -n fastmeasure python=3.10 pytorch torchvision cudatoolkit=11.8 -c pytorch -c conda-forge
conda activate fastmeasure
pip install -r envs/requirements.txt

Method 2: Pip Installation

If you prefer pip:

# 1. Create conda environment
conda create -n fastmeasure python=3.10 -y
conda activate fastmeasure

# 2. Install dependencies
pip install -r envs/requirements.txt

# 3. Run FastMeasure (after downloading models, see Model Files section)
python run_fastsam.py --input your_image.jpg

Optional: MobileSAM

MobileSAM provides higher precision but requires additional installation from GitHub:

# Install from GitHub
pip install git+https://github.com/ChaoningZhang/MobileSAM.git

Or manual installation:

  1. Download https://github.com/ChaoningZhang/MobileSAM/archive/refs/heads/master.zip
  2. Extract and run: pip install -e .

Note: MobileSAM is optional. FastSAM is recommended for most use cases (faster and easier to install).

Environment Files

Configuration files in envs/ folder:

  • envs/environment.yml - Conda environment configuration (recommended)
  • envs/requirements.txt - Pip requirements list
  • envs/env-*.yaml - Additional environment examples (for reference)

Verification

# Test installation
python -c "from core.segment_core import create_labeled_image; print('OK')"

# Run FastMeasure
python run_fastsam.py --help
Alternative: Manual Dependency Installation

If you prefer to install dependencies individually:

    conda create -n fastmeasure python=3.10
    conda activate fastmeasure

    # Install PyTorch 2.3+ (supports NumPy 2.x)
    pip install "torch>=2.3.0" "torchvision>=0.18.0"
    
    # Install other dependencies
    pip install opencv-python pandas matplotlib numpy pyyaml "ultralytics>=8.2.0" shapely scikit-image pillow
    
    # MobileSAM (optional but recommended)
    # Note: mobile_sam requires timm, which is installed automatically
    pip install git+https://github.com/ChaoningZhang/MobileSAM.git

Model Files

FastMeasure requires pre-trained model files (~700 MB total, not included in repository).

Download Model Files

Option 1: Google Drive (Recommended)

  1. Download model files from Google Drive:

  2. Place downloaded files in models/ folder

Option 2: Manual Preparation

If you have trained your own models, place them in models/ folder:

| Model | Filename | Size | Required |
|---|---|---|---|
| YOLO detection | best_yolo_20260107.pt | ~100 MB | ✓ Yes |
| FastSAM | FastSAM-s.pt | ~150 MB | ✓ Yes |
| MobileSAM | mobile_sam.pt | ~450 MB | ✗ No (optional) |

Check Model Files

python utils/download_models.py

This will check if all required model files are present.
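A minimal version of such a check can be written in a few lines; the filenames below come from the table above, and the helper name is an illustrative assumption (not the actual contents of utils/download_models.py):

```python
from pathlib import Path

# Filenames from the Model Files table above
REQUIRED = {"best_yolo_20260107.pt", "FastSAM-s.pt"}
OPTIONAL = {"mobile_sam.pt"}

def check_models(models_dir="models"):
    """Return the set of required model files missing from models_dir.
    An empty set means FastMeasure is ready to run."""
    present = {p.name for p in Path(models_dir).glob("*.pt")}
    return REQUIRED - present
```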

Model File Structure

After downloading, your models/ folder should look like:

models/
├── best_yolo_20260107.pt    # YOLO detection model
├── FastSAM-s.pt             # FastSAM model
└── mobile_sam.pt            # MobileSAM model (optional)

Smooth Edge Configurations

For jagged edge issues, use optimized configurations:

# Standard smooth (balanced)
python fastsam_interactive.py --config configs/fastsam_smooth.yaml

# Ultra smooth (maximum smoothing, slower)
python fastsam_interactive.py --config configs/fastsam_ultra_smooth.yaml

Usage Guide

Unified Entry Point (Recommended)

The project provides a unified entry script run.py as a convenience wrapper for FastSAM and MobileSAM.

Note: run.py simply forwards to run_fastsam.py or run_mobilesam.py - all functionality is identical.

# FastSAM processing (equivalent to: python run_fastsam.py --input image.tif)
python run.py fastsam --input path/to/image.tif

# MobileSAM batch processing (equivalent to: python run_mobilesam.py --input folder/ --batch)
python run.py mobilesam --input path/to/folder --batch

# Interactive mode
python run.py mobilesam --interactive

# Terminal wizard mode (no arguments)
python run.py fastsam
python run.py mobilesam

When to use which:

  • Use run.py if you prefer a single entry point for both modes
  • Use run_fastsam.py/run_mobilesam.py directly for clarity or scripting

FastSAM Processing Workflow

1. Process Single Image

python run_fastsam.py --input path/to/image.tif
# Or use unified entry
python run.py fastsam --input path/to/image.tif

2. Batch Process Folder

python run_fastsam.py --input path/to/folder --batch
# Or use unified entry
python run.py fastsam --input path/to/folder --batch

3. Use Custom Configuration

python run_fastsam.py --config configs/fastsam.yaml --input image.tif

4. Adjust Processing Parameters

python run_fastsam.py --input image.tif --conf 0.3 --min-area 50 --output my_results

MobileSAM Processing Workflow

1. Terminal Interactive Mode (Recommended for Beginners)

python run_mobilesam.py
# Or use unified entry
python run.py mobilesam

Follow prompts to select processing mode and input parameters.

2. Process Single Image

python run_mobilesam.py --input path/to/image.tif
# Or use unified entry
python run.py mobilesam --input path/to/image.tif

3. Batch Process Folder

python run_mobilesam.py --input path/to/folder --batch
# Or use unified entry
python run.py mobilesam --input path/to/folder --batch

4. GUI Interactive Segmentation

python run_mobilesam.py --interactive
# Or use unified entry
python run.py mobilesam --interactive

Interactive Mode Operation Guide

| Key/Operation | Function |
|---|---|
| Left click | Add foreground point (segmentation target) |
| Right click | Add background point (exclusion area) |
| x | Delete last grain |
| d | Delete all grains |
| s | Show save options menu |
| S (Shift+s) | Quick-save complete results |
| c | Clear all point marks |
| r | Reset interface |
| m | Manual scale calibration (measure a known length) |
| h | Show help |
| q | Quit |

Save Options

Press s (lowercase): Displays save options menu:

  1. Quick save complete results - Same as Shift+S
  2. Custom save path - Choose directory via file dialog
  3. Cancel - Return to segmentation

Press S (Shift+s): Immediately saves all results without prompting:

  • Saves to results/[mode]/interactive/[timestamp]/
  • Includes all output files (see Output File Guide)
  • Best for quick saving during segmentation

Manual Scale Calibration

When automatic scale bar detection fails, you can manually calibrate the scale:

  1. Press m key to enter scale calibration mode
  2. Click the start point of a known-length line (e.g., scale bar, ruler)
  3. Click the end point of the line
  4. Enter the actual length in microns in the input box at the bottom of the image
  5. Click OK or press Enter to confirm
  6. The system will calculate and store the scale factor (um/px)

Note: The scale calibration input is now embedded in the matplotlib window (PyInstaller compatible) instead of a popup dialog.

This allows you to use any known-length feature in the image for calibration.
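The calibration math itself is just the ratio of the known real-world length to the clicked pixel distance. A small sketch (the function name is an assumption, not the project's API in core/scale_calibration.py):

```python
import math

def scale_factor(p_start, p_end, known_length_um):
    """um/px from two clicked points (x, y) and a known real length,
    as in the 'm' calibration workflow described above."""
    px = math.hypot(p_end[0] - p_start[0], p_end[1] - p_start[1])
    return known_length_um / px
```

For example, clicking the two ends of a 1000 um scale bar that spans 100 pixels yields a factor of 10 um/px.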

Platform Notes

macOS Compatibility

The system is fully compatible with macOS. However, please note:

  • First run may be slower due to model loading
  • Interactive mode requires a display (not supported on remote SSH without X11)
  • File dialogs run on main thread to ensure macOS compatibility

Model Training

FastMeasure includes a YOLO Fine-tuning Module (utils/train_yolo.py) that allows you to improve detection accuracy on your specific rock types, similar to segmenteverygrain's U-Net fine-tuning capability.

Why Fine-tune?

  • Better Accuracy: YOLO models trained on generic datasets may miss specific grain types in your samples
  • Adapt to New Rock Types: Fine-tune on your own thin-section images for best results
  • Iterative Improvement: Use interactive mode results as training data

Quick Start

The easiest way to fine-tune is using your interactive segmentation results:

# Step 1: Generate some training data using interactive mode
python run.py mobilesam --interactive
# Segment several images and save the results

# Step 2: Fine-tune YOLO using those results
python utils/train_yolo.py --mode quick --input results/mobilesam/interactive/ --epochs 50

# Step 3: Use the fine-tuned model
cp training_outputs/runs/train_*/weights/best.pt ./models/my_finetuned_yolo.pt
# Update configs/fastsam.yaml: yolo: "./models/my_finetuned_yolo.pt"

Training Options

| Option | Description | Default |
|---|---|---|
| --mode | Training mode: quick (from interactive results) or train (from dataset) | quick |
| --input | Directory containing interactive-mode results | Required for quick |
| --data | Path to YOLO-format dataset YAML | Required for train |
| --base | Base model: yolov8n/s/m/l/x.pt or path to a .pt file | yolov8n.pt |
| --epochs | Number of training epochs | 50 |
| --imgsz | Input image size | 1024 |
| --batch | Batch size (reduce if out of memory) | 8 |
| --device | Device: auto, cpu, cuda, mps | auto |

Advanced Usage

# Fine-tune from existing model with more epochs
python utils/train_yolo.py --mode quick \
                     --input results/mobilesam/interactive/ \
                     --base ./models/best_yolo_20260107.pt \
                     --epochs 100 \
                     --imgsz 1024

# Use larger model for better accuracy (slower)
python utils/train_yolo.py --mode quick \
                     --input results/interactive/ \
                     --base yolov8m.pt \
                     --epochs 50

# Train with custom YOLO-format dataset
python utils/train_yolo.py --mode train \
                     --data ./my_grain_dataset/dataset.yaml \
                     --epochs 200

Dataset Format (for Custom Training)

If you have existing annotations, you can create a YOLO-format dataset:

dataset/
├── images/
│   ├── train/
│   ├── val/
│   └── test/
├── labels/
│   ├── train/
│   ├── val/
│   └── test/
└── dataset.yaml

dataset.yaml format:

path: /path/to/dataset
train: images/train
val: images/val
test: images/test
nc: 1
names: ['grain']
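Each label file in labels/ contains one line per grain in the standard YOLO box format: class index followed by box center and size, all normalized to the image dimensions. A small helper illustrating the conversion (the function name is illustrative, not part of FastMeasure):

```python
def yolo_label_line(cls, box, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) into a
    normalized YOLO label line: 'cls cx cy w h' (standard YOLO format)."""
    x0, y0, x1, y1 = box
    cx = (x0 + x1) / 2 / img_w      # box center x, normalized
    cy = (y0 + y1) / 2 / img_h      # box center y, normalized
    w = (x1 - x0) / img_w           # box width, normalized
    h = (y1 - y0) / img_h           # box height, normalized
    return f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"
```

With nc: 1 as above, every grain uses class index 0.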

Configuration File Guide

Main Configuration File (configs/fastsam.yaml / configs/mobilesam.yaml)

# Model path configuration
model_paths:
  yolo: "./models/best_yolo_20260107.pt"    # YOLO model path
  fastsam: "./models/FastSAM-s.pt"          # FastSAM model path
  device: "cpu"                              # Running device: cpu or cuda

# Scale bar detection configuration
scale_detection:
  enabled: true
  known_length_um: 1000.0                    # Scale bar actual length (microns)

# Processing parameter configuration
processing:
  yolo_confidence: 0.25                      # YOLO detection confidence threshold
  min_area: 30                               # Minimum grain area (pixels)
  remove_edge_grains: false                  # Whether to remove edge grains

# Output configuration
output:
  root_dir: "results"                        # Result output directory
  save_visualization: true                   # Save visualization results
  save_statistics: true                      # Save CSV statistics file
  save_summary: true                         # Save JSON summary

Note:

  • configs/fastsam.yaml is used for FastSAM mode (default: CPU)
  • configs/mobilesam.yaml is used for MobileSAM mode (default: CPU)
  • Change device to cuda if you have an NVIDIA GPU and CUDA installed

Geometric Parameter Configuration File (configs/geometry.yaml)

grain_statistics_csv:
  enabled: true
  # Columns finally written to CSV (output in this order)
  keep_columns:
    - grain_id
    - area
    - centroid_x
    - centroid_y
    - width
    - height
    - perimeter
    - circularity
    - aspect_ratio
    - compactness
    - roundness
    - area_um2
    - diameter_um

Note: The keep_columns list must use actual column names from the DataFrame:

  • grain_id (not label) - Grain identifier
  • centroid_x, centroid_y - Centroid coordinates
  • width, height - Bounding box dimensions
  • Geometric parameters like circularity, aspect_ratio, etc. are calculated by the geometry module (requires scikit-image)
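The column-selection step can be pictured as a simple filter over the statistics DataFrame; this is a sketch of the idea, not the project's export_csv code:

```python
import pandas as pd

def apply_keep_columns(df, keep_columns):
    """Select and order statistics columns per geometry.yaml's
    keep_columns before writing the CSV. Column names not present
    in the DataFrame are silently dropped, which is why misnamed
    entries (e.g. 'label' instead of 'grain_id') have no effect."""
    cols = [c for c in keep_columns if c in df.columns]
    return df[cols]
```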

Output File Guide

After processing is complete, the system generates the following files in the output directory:

| File name | Description |
|---|---|
| segmentation_result.png | Segmentation-result visualization (with grain contours) |
| segmentation_yolo_style.png | YOLO-style side-by-side comparison (original + colored) |
| segmentation_labeled.png | Labeled result image with grain numbers |
| segmentation_mask.png | Binary segmentation mask |
| grain_statistics.csv | Grain statistics data table |
| summary.json | Processing summary (JSON format) |
| performance.json | Performance statistics |

Interactive Mode Output

When using interactive mode and saving (press s or Shift+S), all files above are generated in:

results/[fastsam|mobilesam]/interactive/[image_name]_[timestamp]/

Note on segmentation_labeled.png:

  • Generated automatically when saving in interactive mode
  • Shows original image with grain number labels
  • Uses intelligent label placement to avoid overlap
  • Labels include grain ID numbers for easy reference

Unified Output Directory Structure

All results are now organized under a unified results/ directory:

results/
├── fastsam/                    # FastSAM results
│   ├── auto/                   # Automatic processing results
│   │   └── [image_name]/
│   │       ├── segmentation_result.png
│   │       ├── segmentation_labeled.png
│   │       ├── segmentation_mask.png
│   │       ├── grain_statistics.csv
│   │       └── summary.json
│   └── interactive/            # Interactive processing results
│       └── [timestamp]/
│           └── ...
├── mobilesam/                  # MobileSAM results
│   ├── auto/                   # Automatic processing results
│   └── interactive/            # Interactive processing results
├── logs/                       # Unified log directory
│   ├── fastsam/                # FastSAM logs
│   └── mobilesam/              # MobileSAM logs
└── temp/                       # Temporary files and cache

Benefits:

  • All results in one place - easy to find and manage
  • Clear separation between modes (FastSAM/MobileSAM) and types (auto/interactive)
  • Unified logs for easier debugging
  • No scattered result folders in project root

Project Structure

.
├── utils/
│   ├── download_models.py      # Model file download script
│   ├── train_yolo.py           # YOLO model training/fine-tuning script
│   ├── gui_launcher.py         # GUI launcher for the desktop app
│   ├── file_dialog.py          # Cross-platform file dialog utilities
│   ├── grain_marker.py         # Grain labeling module
│   └── scale_detector.py       # Scale bar detection module
├── run.py                      # Unified entry script (new)
├── run_fastsam.py              # FastSAM startup script
├── run_mobilesam.py            # MobileSAM startup script (supports interactive mode)
├── mobilesam_interactive.py    # MobileSAM standalone interactive tool
├── configs/fastsam.yaml        # FastSAM configuration file
├── configs/mobilesam.yaml      # MobileSAM configuration file
├── configs/geometry.yaml       # Geometric parameter configuration file
│
├── core/                       # Core module (new)
│   ├── __init__.py             # Core module initialization
│   ├── segment_core.py         # Core segmentation functions (migrated from segmenteverygrain)
│   ├── seg_tools.py            # Shared tool functions
│   ├── seg_optimize.py         # Shared segmentation optimization
│   ├── cli_base.py             # Shared CLI functions
│   ├── scale_calibration.py    # Manual scale calibration (new)
│   └── yolo_trainer.py         # YOLO fine-tuning module (new)
│
├── fastsam/                    # FastSAM module
│   ├── rock_fastsam_system.py  # FastSAM main system
│   ├── yolo_fastsam.py         # YOLO+FastSAM pipeline
│   ├── seg_engine.py           # Segmentation engine
│   ├── seg_optimize.py         # Segmentation optimization (compatibility wrapper)
│   └── seg_tools.py            # Tool functions (compatibility wrapper)
│
├── mobilesam/                  # MobileSAM module
│   ├── rock_mobilesam_system.py  # MobileSAM main system
│   ├── yolo_mobilesam.py         # YOLO+MobileSAM pipeline
│   ├── mobile_sam_engine.py      # MobileSAM engine
│   ├── seg_optimize.py           # Segmentation optimization (compatibility wrapper)
│   └── seg_tools.py              # Tool functions (compatibility wrapper)
│
├── geometry/                   # Geometric parameter calculation module
│   ├── grain_metric.py         # Grain shape parameter calculation
│   ├── config_loader.py        # Config loader
│   └── export_csv.py           # CSV export utility
│
├── configs/                    # Configuration files folder
│   ├── fastsam.yaml            # FastSAM configuration
│   └── geometry.yaml           # Geometric parameters configuration
├── models/                     # Model files directory
├── results/                    # Default output directory
└── Boulder_20260107/           # Test data example

Performance Reference

Performance tests based on an RTX 3060 GPU:

| Model | GPU inference | CPU inference | Speed comparison |
|---|---|---|---|
| FastSAM | ~77 ms | ~294 ms | CPU ~4x slower than GPU |
| MobileSAM | ~3.7 s | ~101 s | CPU ~26x slower than GPU |

Recommendation: For large batch processing, GPU acceleration is recommended; for small batches or testing, CPU mode can be used.

Dependencies

| Package | Purpose |
|---|---|
| torch | Deep learning framework |
| ultralytics | YOLOv8 and SAM models |
| opencv-python | Image processing and scale bar detection |
| pandas | Data processing and statistics |
| matplotlib | Result visualization |
| numpy | Numerical computation |
| pyyaml | Configuration file parsing |
| shapely | Geometric calculation |
| scikit-image | Image processing tools |
| mobile_sam | MobileSAM library (optional; required for MobileSAM interactive mode) |

FAQ

Q: What to do if scale bar detection fails?
A: Check if there is a clear red scale bar at the bottom-right corner of the image, or adjust color threshold parameters like red_lower1/red_upper1 in the configuration file.

Q: How to adjust detection sensitivity?
A: Modify the yolo_confidence parameter in the configuration file (smaller values mean more sensitive detection but may introduce noise).

Q: Interactive mode cannot start GUI?
A: Ensure the system has GUI support, or try setting the environment variable MPLBACKEND=TkAgg.

Q: Why isn't geometry.yaml affecting the output columns?
A: Check two things: 1) Ensure scikit-image is installed (pip install scikit-image), and 2) Verify column names in geometry.yaml match actual DataFrame columns (use grain_id, not label).

Change Log

See CHANGELOG.md for detailed update content of each project version.

Contributing

Contributions are welcome! If you have improvement suggestions or find issues, you can contribute code by submitting an issue or pull request.

Acknowledgments

This project builds upon the excellent work of segmenteverygrain by Zoltán Sylvester and colleagues. We thank them for pioneering the application of SAM in sedimentary grain segmentation and for making their work open-source.

Code Migration Notice

FastMeasure has migrated core segmentation functionality from segmenteverygrain to core/segment_core.py, making the project fully independent. The segmenteverygrain source code has been removed from this repository but remains available in Git history.

Migrated functions (now in core/segment_core.py):

  • create_labeled_image() - Create labeled grain masks
  • plot_image_w_colorful_grains() - Visualize grains with colors
  • plot_grain_axes_and_centroids() - Plot grain orientation axes
  • find_connected_components() - Detect overlapping grains
  • merge_overlapping_polygons() - Merge overlapping segmentations
  • collect_polygon_from_mask() - Extract polygons from masks
  • load_image() - Image loading utilities
  • polygons_to_grains() - Convert polygons to grain objects
  • save_grains() - Save grain data

To view original segmenteverygrain code:

git show HEAD~1:segmenteverygrain/
# or restore temporarily:
git checkout HEAD~1 -- segmenteverygrain/

Key improvements in FastMeasure:

  • YOLO-based Detection: Replaced patch-based U-Net with YOLO for real-time grain detection
  • Multiple SAM Backends: Support for both FastSAM (speed) and MobileSAM (precision)
  • Automatic Scale Detection: Intelligent red scale bar recognition at image corners
  • Enhanced Geometric Analysis: 10+ grain shape parameters including fractal dimension
  • Unified Architecture: Modular core library with command-line interface
  • Cross-Platform: Full macOS and Linux/Windows support

Building Standalone Executable

FastMeasure can be packaged as a standalone executable for Windows, allowing users to run it without installing Python.

Prerequisites

# Install PyInstaller
pip install pyinstaller

Build Instructions

Method 1: Using Build Script (Recommended)

# Run the build script
python application/build_exe.py

This will:

  1. Clean previous builds
  2. Package all Python dependencies
  3. Include model configs and core modules
  4. Create dist/FastMeasure/ folder with executable

Note: The GUI launcher (utils/gui_launcher.py) is used as the entry point. It directly calls the system classes (not via subprocess), ensuring consistent behavior between the executable and Python scripts.

Method 2: Manual Build

# Build one-directory (recommended, faster startup)
pyinstaller --name FastMeasure \
            --windowed \
            --onedir \
            --add-data "core;core" \
            --add-data "fastsam;fastsam" \
            --add-data "mobilesam;mobilesam" \
            --add-data "geometry;geometry" \
            --add-data "configs;configs" \
            --add-data "utils;utils" \
            --hidden-import "core" \
            --hidden-import "core.seg_tools" \
            --hidden-import "core.segment_core" \
            --hidden-import "geometry.grain_metric" \
            --hidden-import "fastsam.rock_fastsam_system" \
            --hidden-import "mobilesam.rock_mobilesam_system" \
            --hidden-import "ultralytics" \
            --hidden-import "torch" \
            utils/gui_launcher.py

Distribution

After building:

dist/
└── FastMeasure/
    ├── FastMeasure.exe      # Main executable
    ├── models/              # Place model files here
    ├── results/             # Output directory
    └── _internal/           # Python libraries

Before distributing:

  1. Download model files (see Model Files)
  2. Place in dist/FastMeasure/models/
  3. Zip the entire FastMeasure/ folder
  4. Share the zip file with users

Creating Windows Installer (Optional)

If you need a Windows installer (.exe setup file), you can create one using Inno Setup:

  1. Install Inno Setup
  2. Create a script file (e.g., installer.iss):

[Setup]
AppName=FastMeasure
AppVersion=1.0.0
DefaultDirName={autopf}\FastMeasure
OutputBaseFilename=FastMeasure_Setup

[Files]
Source: "dist\FastMeasure\*"; DestDir: "{app}"; Flags: ignoreversion recursesubdirs

[Icons]
Name: "{autoprograms}\FastMeasure"; Filename: "{app}\FastMeasure.exe"

  3. Compile the script in the Inno Setup Compiler to produce FastMeasure_Setup.exe

Notes

  • Executable size: ~500MB-1GB (includes Python + PyTorch)
  • Startup time: First launch may take 10-30 seconds (model loading)
  • CPU mode: The executable defaults to CPU mode for compatibility
  • Model files: Not included in build (too large), must be downloaded separately

License

LICENSE