[New Rules] New Terms rules for malicious Python/Pickle model activity on macOS #5780
Open
DefSecSentinel wants to merge 9 commits into main from
Conversation
…y on macOS

Adds three new_terms SIEM detection rules to close the detection gap identified in ia-trade-team#666, where malicious pickle/PyTorch model files execute arbitrary commands via Python deserialization without triggering the existing GenAI-parent-gated endpoint rules.

Co-authored-by: Cursor <cursoragent@cursor.com>
⛔️ Test failed Results
Contributor
Rule: New - Guidelines
These guidelines serve as a reminder of considerations when proposing a new rule:
- Documentation and Context
- Rule Metadata Checks
- New BBR Rules
- Testing and Validation
rules/macos/credential_access_python_sensitive_file_access_first_occurrence.toml
Samirbous reviewed Feb 27, 2026
rules/macos/credential_access_python_sensitive_file_access_first_occurrence.toml (Outdated)
rules/macos/credential_access_python_sensitive_file_access_first_occurrence.toml (Outdated)
- Update descriptions across all three rules to avoid over-attributing to pickle/PyTorch: these rules detect any malicious Python activity (scripts, compromised dependencies, model deserialization, etc.)
- Simplify process.name from explicit enumeration to the python* wildcard, since KQL matching is case-insensitive
- Update investigation guides to reflect the broader scope of potential attack vectors

Made-with: Cursor
Samirbous approved these changes Mar 2, 2026
Aegrah approved these changes Mar 4, 2026
Summary
Adds three `new_terms` SIEM detection rules for macOS to close the detection gap identified in ia-trade-team#666, where malicious pickle/PyTorch model files execute arbitrary commands via Python's `__reduce__` deserialization without triggering the existing GenAI-parent-gated endpoint rules.

- … executes `-c` for the first time on a host (7-day lookback). Covers reverse shells, reconnaissance, and post-exploitation commands triggered by malicious model loading.
- … `open` event action, which is pre-filtered to sensitive file paths.
- … `logs-endpoint.events.persistence-*` index to capture plist metadata (program arguments, run-at-load, keep-alive).
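For context, the `__reduce__` execution path the summary refers to can be demonstrated with a minimal, benign snippet. The `Payload` class and the `eval("7 * 6")` payload below are hypothetical illustrations, not taken from the PR; a real malicious model would embed something like `os.system("<reverse shell>")` inside a pickled `.pt` file instead.

```python
import pickle

# Benign demonstration of the pickle deserialization flaw: any object can
# instruct pickle (via __reduce__) to call an arbitrary callable on load.
class Payload:
    def __reduce__(self):
        # At unpickle time, pickle invokes eval("7 * 6"). In an attack this
        # callable would be os.system or similar, hidden inside a model file.
        return (eval, ("7 * 6",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # code executes during deserialization
print(result)  # → 42
```

Loading the blob is enough to trigger execution; no method on the deserialized object ever needs to be called, which is why model loading alone is a viable attack vector.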
PyTorch's `.pt`/`.pth` model format uses pickle serialization, which allows arbitrary code execution via `__reduce__` on deserialization. This is a permanent, architectural flaw, not a patchable bug. Research confirmed that when `python3` directly loads a malicious model (the most realistic data scientist scenario), none of the existing cross-platform GenAI endpoint rules fire, because they gate on a GenAI parent process (`ollama`, `cursor`, `lmstudio`, etc.).

The `new_terms` rule type is ideal here because the individual behaviors (shell spawn, credential access, plist creation) each carry a high false-positive rate under plain pattern matching, but a first occurrence on a host is a strong signal of compromise.

Related
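As an aside on the rule type chosen above: the first-occurrence logic that `new_terms` rules rely on can be sketched in a few lines. This is a toy illustration under stated assumptions, not Elastic's implementation; the class and field names are invented for clarity.

```python
from datetime import datetime, timedelta

# Toy sketch of the new_terms idea: alert only when a (host, term) pair
# has not been observed within the lookback window.
class NewTermsDetector:
    def __init__(self, lookback_days=7):
        self.lookback = timedelta(days=lookback_days)
        self.last_seen = {}  # (host, term) -> most recent observation time

    def observe(self, host, term, ts):
        key = (host, term)
        prev = self.last_seen.get(key)
        self.last_seen[key] = ts
        # "New" means never seen, or not seen within the lookback window.
        return prev is None or (ts - prev) > self.lookback

d = NewTermsDetector()
t0 = datetime(2026, 3, 1)
print(d.observe("mac-01", "python3 -> sh -c", t0))                      # first sight: True
print(d.observe("mac-01", "python3 -> sh -c", t0 + timedelta(days=1)))  # repeat: False
```

A developer who runs `sh -c` from Python daily never alerts, while the same pair appearing for the first time on a data scientist's laptop does, which is exactly the trade-off the PR description argues for.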
Made with Cursor