Merged
6 changes: 6 additions & 0 deletions Dockerfile
Original file line number Diff line number Diff line change
@@ -1,3 +1,9 @@
# Build:
# docker build -t ghcr.io/decodingraphael/unraphael .
# Push to GitHub Container Registry:
# docker push ghcr.io/decodingraphael/unraphael
# Run:
# docker run -p 8501:8501 ghcr.io/decodingraphael/unraphael
FROM python:3.12-slim

RUN pip install torch==2.3.1+cpu torchvision torchaudio 'numpy<2.0' --extra-index-url https://download.pytorch.org/whl/cpu
67 changes: 67 additions & 0 deletions README.md
@@ -75,3 +75,70 @@ unraphael-dash
Check out our [Contributing Guidelines](CONTRIBUTING.md#Getting-started-with-development) to get started with development.

Suggestions, improvements, and edits are most welcome.

## Self-hosted deployment

To run the dashboard on your own server, install the system dependencies and the package:

```shell
sudo apt-get update && sudo apt-get install -y libgl1 libglib2.0-0
# As the www-data user
python3 -m venv venv
venv/bin/pip install 'unraphael[dash]@git+https://github.com/DecodingRaphael/unraphael.git@0.3'
```

<details>
<summary>Systemd service</summary>

To run unraphael as a service, you can create a systemd service file. This will allow you to start, stop, and restart unraphael using systemd.

1. Create a service file for unraphael, for example `/etc/systemd/system/unraphael.service`:

```
[Unit]
Description=Unraphael dashboard
After=network.target

[Service]
Environment="XDG_CACHE_HOME=/cache/dir" HOME="/writable/dir"
User=youruser
WorkingDirectory=/home/youruser
ExecStart=/home/youruser/.local/bin/unraphael-dash
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Replace `/cache/dir` and `/writable/dir` with the actual paths to your cache and writable directories.
Replace `youruser` with your actual username. Also, make sure that the path to `unraphael-dash` is correct. You can find the correct path using `which unraphael-dash`.

2. Enable the service:

```console
sudo systemctl enable unraphael.service
```

3. Start the service:

```console
sudo systemctl start unraphael.service
```

4. Check the status of the service:

```console
sudo systemctl status unraphael.service
```

5. To stop the service:

```console
sudo systemctl stop unraphael.service
```

</details>
2 changes: 1 addition & 1 deletion docs/credits.md
@@ -1,6 +1,6 @@
# Credits and Contact Information

Unraphael is maintained by the [Netherlands eScience Center](https://www.esciencecenter.nl/) in collaboration with the [Department of History and Art History](https://www.uu.nl/en/organisation/department-of-history-and-art-history) at the [University of Utrecht](https://www.uu.nl/). The work was supported through a *Small-Scale Initiatives Digital Approaches to the Humanities* grant
Unraphael is maintained by the [Netherlands eScience Center](https://www.esciencecenter.nl/) in collaboration with the [Department of History and Art History](https://www.uu.nl/en/organisation/department-of-history-and-art-history) at the [University of Utrecht](https://www.uu.nl/). The work was supported through a *Small-Scale Initiatives Digital Approaches to the Humanities* grant

If you have any questions, feedback, or need support, please feel free to reach out to us. Below are the primary contacts and useful links for your reference:

4 changes: 2 additions & 2 deletions docs/steps/analysis.md
@@ -32,7 +32,7 @@ This then allows us to answer how similar the areas of the main outlines in two
![Dimensions of paintings](painting_dimensions.png)

- **Upload Photos:** Upload the digital photos of the paintings from a folder on your computer. For now, use the *unaligned photos*, preferably with the background removed.

1. **Overview of Image Sizes and DPI:**
- **Check Image Metrics:** Review the sizes and DPI of the uploaded images. This information helps in converting pixel measurements to physical dimensions.

@@ -50,4 +50,4 @@ This then allows us to answer how similar the areas of the main outlines in two

5. More detailed information is provided in the table below the heatmap indicating whether the areas of the main figures in the two paintings are similar, given the set tolerance.

![Example of matching areas of main figure in real paintings](img_identical.png)
![Example of matching areas of main figure in real paintings](img_identical.png)
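The pixel-to-physical conversion that the DPI check above enables is simple arithmetic; a minimal sketch (the function name is illustrative, not part of unraphael):

```python
def pixels_to_cm(pixels: float, dpi: float) -> float:
    """Convert a pixel measurement to centimetres via the image DPI."""
    # DPI is pixels per inch; one inch is 2.54 cm.
    return pixels / dpi * 2.54

# A 600-pixel-wide outline scanned at 300 DPI spans 2 inches, i.e. 5.08 cm.
print(pixels_to_cm(600, 300))
```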
8 changes: 4 additions & 4 deletions docs/steps/clustering.md
@@ -13,7 +13,7 @@ By following the steps described below, you can effectively group your images ba

### Clustering Methods

- We make use of functionality provided by the [clusteval package](https://erdogant.github.io/clusteval/pages/html/index.html) to derive the optimal number of clusters using silhouette, dbindex, and derivatives in combination with clustering methods, such as agglomerative, kmeans, dbscan and hdbscan.
- We make use of functionality provided by the [clusteval package](https://erdogant.github.io/clusteval/pages/html/index.html) to derive the optimal number of clusters using silhouette, dbindex, and derivatives in combination with clustering methods, such as agglomerative, kmeans, dbscan and hdbscan.
- For more information on the methods or interpreting the results, we highly recommend looking into the [clusteval documentation](https://erdogant.github.io/clusteval/pages/html/index.html).
- Multiple clustering algorithms

@@ -62,10 +62,10 @@ These aligned images are now prepared for clustering, having been standardized i

### 4 Clustering Images

Two primary clustering approaches are available:
Two primary clustering approaches are available:

- *Outer Contours Clustering*
- *Complete Figures Clustering*.
- *Outer Contours Clustering*
- *Complete Figures Clustering*.

Both of these clustering processes group images based on structural similarities. Unlike semantic clustering, which might group images based on their color and content (e.g., animals, landscapes), structural clustering focuses on patterns, textures, and shapes.
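The silhouette criterion used by clusteval scores how well each image sits inside its assigned cluster versus the nearest other cluster; a pure-Python sketch on 1-D toy data (illustrative only — clusteval computes this internally on real feature vectors, and this sketch assumes every cluster has at least two members):

```python
from collections import defaultdict

def mean_silhouette(points: list[float], labels: list[int]) -> float:
    """Mean silhouette coefficient: s = (b - a) / max(a, b) per point,
    where a is the mean distance to the point's own cluster and b the
    mean distance to the nearest other cluster."""
    clusters = defaultdict(list)
    for p, lab in zip(points, labels):
        clusters[lab].append(p)
    scores = []
    for p, lab in zip(points, labels):
        own = clusters[lab]
        # |p - p| = 0, so summing over the whole cluster and dividing by
        # (size - 1) excludes the point itself from its own average.
        a = sum(abs(p - q) for q in own) / (len(own) - 1)
        b = min(
            sum(abs(p - q) for q in other) / len(other)
            for key, other in clusters.items()
            if key != lab
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated groups score near 1; a shuffled labelling scores poorly.
print(mean_silhouette([0.0, 0.1, 10.0, 10.1], [0, 0, 1, 1]))
```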

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -118,7 +118,7 @@ module = ["yaml.*", "toml.*"]
ignore_missing_imports = true

[tool.ruff]
line-length = 96
line-length = 270
target-version = "py310"
extend-include = ["*.ipynb"]

16 changes: 4 additions & 12 deletions src/unraphael/dash/align.py
@@ -16,9 +16,7 @@
from unraphael.types import ImageType


def detect_and_compute_features(
image_gray: np.ndarray, method: str, maxFeatures: int
) -> Tuple[list, np.ndarray]:
def detect_and_compute_features(image_gray: np.ndarray, method: str, maxFeatures: int) -> Tuple[list, np.ndarray]:
"""Detects and computes features in the image."""
if method == 'SIFT':
feature_detector = cv2.SIFT_create()
@@ -58,14 +56,10 @@ def compute_homography(matches: list, kpsA: list, kpsB: list, keepPercent: float
return cv2.findHomography(ptsA, ptsB, method=cv2.RANSAC)[0]


def apply_homography(
target: np.ndarray, H: np.ndarray, template_shape: Tuple[int, int, int]
) -> np.ndarray:
def apply_homography(target: np.ndarray, H: np.ndarray, template_shape: Tuple[int, int, int]) -> np.ndarray:
"""Applies the homography matrix to the target image."""
h, w, c = template_shape
return cv2.warpPerspective(
target, H, (w, h), borderMode=cv2.BORDER_CONSTANT, borderValue=(0, 0, 0, 0)
)
return cv2.warpPerspective(target, H, (w, h), borderMode=cv2.BORDER_CONSTANT, borderValue=(0, 0, 0, 0))


def homography_matrix(
@@ -205,9 +199,7 @@ def feature_align(

# apply the homography matrix to align the images, including the rotation
h, w, c = template.shape
aligned = cv2.warpPerspective(
target, H, (w, h), borderMode=cv2.BORDER_CONSTANT, borderValue=(0, 0, 0, 0)
)
aligned = cv2.warpPerspective(target, H, (w, h), borderMode=cv2.BORDER_CONSTANT, borderValue=(0, 0, 0, 0))

out = image.replace(data=aligned)
out.metrics.update(angle=angle)
20 changes: 5 additions & 15 deletions src/unraphael/dash/equalize.py
@@ -54,9 +54,7 @@ def normalize_brightness(

# Adjust the L channel (brightness) of the target image based
# on the mean brightness of the template
l_target = (
(l_target * (np.mean(l_template) / np.mean(l_target))).clip(0, 255).astype(np.uint8)
)
l_target = (l_target * (np.mean(l_template) / np.mean(l_target))).clip(0, 255).astype(np.uint8)

# Merge LAB channels back for the adjusted target image
equalized_img_lab = cv2.merge([l_target, a_target, b_target])
@@ -133,9 +131,7 @@ def normalize_contrast(
std_target = np.std(target_lab[:, :, 0])

# Adjust contrast of target image to match template image
l_target = (
(target_lab[:, :, 0] * (std_template / std_target)).clip(0, 255).astype(np.uint8)
)
l_target = (target_lab[:, :, 0] * (std_template / std_target)).clip(0, 255).astype(np.uint8)
normalized_img_lab = cv2.merge([l_target, target_lab[:, :, 1], target_lab[:, :, 2]])

# Convert the adjusted LAB image back to RGB
@@ -207,17 +203,11 @@ def normalize_sharpness(
mean_grad_target = np.mean(grad_target)

# Adjust sharpness of target image to match template image
normalized_img = (
(target * (mean_grad_template / mean_grad_target)).clip(0, 255).astype(np.uint8)
)
normalized_img = (target * (mean_grad_template / mean_grad_target)).clip(0, 255).astype(np.uint8)

# Calculate sharpness value for the normalized image
grad_x_normalized = cv2.Sobel(
cv2.cvtColor(normalized_img, cv2.COLOR_RGB2GRAY), cv2.CV_64F, 1, 0, ksize=3
)
grad_y_normalized = cv2.Sobel(
cv2.cvtColor(normalized_img, cv2.COLOR_RGB2GRAY), cv2.CV_64F, 0, 1, ksize=3
)
grad_x_normalized = cv2.Sobel(cv2.cvtColor(normalized_img, cv2.COLOR_RGB2GRAY), cv2.CV_64F, 1, 0, ksize=3)
grad_y_normalized = cv2.Sobel(cv2.cvtColor(normalized_img, cv2.COLOR_RGB2GRAY), cv2.CV_64F, 0, 1, ksize=3)
grad_normalized = cv2.magnitude(grad_x_normalized, grad_y_normalized)
mean_grad_normalized = np.mean(grad_normalized)

12 changes: 4 additions & 8 deletions src/unraphael/dash/home.py
@@ -13,12 +13,7 @@
menu_items={
'Get Help': 'https://unraphael.readthedocs.io',
'Report a bug': 'https://github.com/DecodingRaphael/unraphael/issues',
'About': (
f'**unraphael**: a dashboard for unraphael ({__version__}). '
'\n\nPython toolkit for *unraveling* image similarity with a focus '
'on artistic practice. '
'\n\nFor more information, see: https://github.com/DedodingRaphael/unraphael'
),
'About': (f'**unraphael**: a dashboard for unraphael ({__version__}). ' '\n\nPython toolkit for *unraveling* image similarity with a focus ' 'on artistic practice. ' '\n\nFor more information, see: https://github.com/DecodingRaphael/unraphael'),
},
)

@@ -36,7 +31,8 @@

This tool aims to provide new insights into Raphael's working methods through new digital
approaches for the study of artistic practice in art history.
""")
"""
)

# Center-align using Streamlit's layout
col1, col2, col3 = st.columns([1, 2, 1]) # Middle column is wider
@@ -81,7 +77,7 @@

This project is maintained by the [Netherlands eScience Center](https://www.esciencecenter.nl/) in collaboration with the [Department of History and Art History](https://www.uu.nl/en/organisation/department-of-history-and-art-history) at the University of Utrecht.

**Principal Investigator:** Dr. L. Costiner ([l.costiner@uu.nl](mailto:l.costiner@uu.nl))
**Principal Investigator:** Dr. L. Costiner ([l.costiner@uu.nl](mailto:l.costiner@uu.nl))
**Technical Support:** Thijs Vroegh, Stef Smeets ([t.vroegh@esciencecenter.nl](mailto:t.vroegh@esciencecenter.nl), [s.smeets@esciencecenter.nl](mailto:s.smeets@esciencecenter.nl))

Supported through a *Small-Scale Initiatives Digital Approaches to the Humanities* grant.