[New Rule] AWS API Activity from Uncommon S3 Client by Rare User#5694
Detects AWS API activity originating from the S3 Browser application based on the user agent string. S3 Browser is a Windows-based graphical client for managing S3 buckets that is rarely used in enterprise environments but has been observed in use by threat actors for data exfiltration due to its ease of use and bulk download capabilities. This rule was inspired by Permiso's LUCR-3 research, which documented Scattered Spider using S3 Browser (v10.9.9) for data theft operations. No usage was captured in alert telemetry, and only one user was observed using this client in production data.

Existing Related Coverage: We have several S3-related exfiltration rules covering bucket replication, policy modifications, and ransomware indicators. This new rule closes a gap by detecting a specific attacker tooling fingerprint rather than relying solely on behavioral patterns.
Rule: New - Guidelines

These guidelines serve as a reminder of the considerations when proposing a new rule.

Documentation and Context
Rule Metadata Checks
New BBR Rules
Testing and Validation
Co-authored-by: Ruben Groenewoud <78494512+Aegrah@users.noreply.github.com>
This rule detects AWS API activity from S3 Browser and Cyberduck desktop clients based on user agent strings. Both are graphical S3 management tools that provide bulk upload/download capabilities and have been observed in use by threat actors for data exfiltration. S3 Browser usage is specifically documented in the Permiso blog on LUCR-3 (Scattered Spider), while Cyberduck is referenced in the MITRE ATT&CK Threat Emulation of Scattered Spider. The rule uses a New Terms approach on cloud.account.id and user.name to alert only on the first occurrence per user/account, reducing noise from repeated GetObject or PutObject operations while still capturing new suspicious tool usage. No existing rules currently detect activity based on these specific S3 client user agents. This fills a gap in detecting exfiltration tooling commonly used in post-compromise data theft operations.
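For context, a sketch of how a rule like this could be laid out in the detection-rules TOML format. This is an illustration only, assuming the repo's New Terms schema; the index patterns, history window, and metadata here are placeholders, not the merged rule's actual values:

```toml
[rule]
author = ["Elastic"]
name = "AWS API Activity from Uncommon S3 Client by Rare User"
type = "new_terms"
language = "kuery"
index = ["logs-aws.cloudtrail-*"]
query = '''
event.dataset: "aws.cloudtrail" and user_agent.original: (*S3 Browser* or *Cyberduck*)
'''

# New Terms configuration: alert only on the first occurrence of the
# (cloud.account.id, user.name) pair within the history window.
[rule.new_terms]
field = "new_terms_fields"
value = ["cloud.account.id", "user.name"]

[[rule.new_terms.history_window_start]]
field = "history_window_start"
value = "now-14d"
```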
Mikaayenson
left a comment
I'd suggest double-checking the window and other clients. Otherwise LGTM.
```
query = '''
event.dataset: "aws.cloudtrail"
and user_agent.original: (*S3*Browser* or *Cyberduck*)
```
Suggested change:

```diff
- and user_agent.original: (*S3*Browser* or *Cyberduck*)
+ and user_agent.original: (*S3 Browser* or *Cyberduck*)
```
Do we need the wildcard there?
@Mikaayenson I got some unit test issues when I didn't include the wildcard; in Kibana, no, we do not need the wildcard.
There are some pretty significant drawbacks to having to include the wildcard between "S3" and "Browser" instead of the space. I'm reverting this one to draft until we can resolve it; we may consider using the EQL string_contains function instead.
cc @eric-forte-elastic do you have an idea for why our unit test would block this for KQL?
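To illustrate the drawback discussed above, here is a small standalone Python sketch (not code from this repo) that translates a KQL-style wildcard pattern into a regex. Both patterns match the real S3 Browser user agent, but the extra `*` between the words also matches unrelated strings, which is the false-positive risk of the workaround (the `S3Sync` user agent below is invented for illustration):

```python
import re

def kql_wildcard_to_regex(pattern: str) -> re.Pattern:
    # KQL wildcard semantics: '*' matches any run of characters;
    # everything else is matched literally (case-insensitively here).
    parts = pattern.split("*")
    return re.compile(".*".join(re.escape(p) for p in parts), re.IGNORECASE)

ua = "S3 Browser/10.9.9 (https://s3browser.com)"

# Both patterns match the genuine S3 Browser user agent...
assert kql_wildcard_to_regex("*S3 Browser*").search(ua) is not None
assert kql_wildcard_to_regex("*S3*Browser*").search(ua) is not None

# ...but the extra wildcard also matches unrelated agents,
# e.g. this hypothetical string:
other = "S3Sync (Mozilla-compatible Browser)"
assert kql_wildcard_to_regex("*S3 Browser*").search(other) is None
assert kql_wildcard_to_regex("*S3*Browser*").search(other) is not None
```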
```
query = '''
event.dataset: "aws.cloudtrail"
and user_agent.original: (*S3*Browser* or *Cyberduck*)
```
What about other S3 clients?
## Summary

Fixes the KQL parser to support wildcard values containing spaces (e.g., `*S3 Browser*`), which work in Kibana but were rejected by our unit tests.

**Issue:** #5750

## Changes

### Grammar (`lib/kql/kql/kql.g`)

- Added a `WILDCARD_LITERAL` token with priority 3 to match wildcard patterns containing spaces
- Uses a negative lookahead to stop before the `or`/`and`/`not` keywords
- Added to the `value` rule (not `literal`) so field names remain unaffected

### Parser (`lib/kql/kql/parser.py`)

- Handle the new `WILDCARD_LITERAL` token type as wildcards
- Quoted strings (`"*text*"`) are now treated as literals, matching Kibana behavior

## Behavior

| Query | Before | After |
|-------|--------|-------|
| `field: *S3 Browser*` | ❌ Parse error | ✅ Wildcard |
| `field: *test*` | ✅ Wildcard | ✅ Wildcard |
| `common.*: value` | ✅ Works | ✅ Works |
| `field: "*text*"` | Wildcard | ✅ Literal (matches Kibana) |

## Test plan

- [x] All 63 existing KQL unit tests pass
- [x] New wildcard-with-spaces patterns parse correctly
- [x] Wildcard field names (`common.*`) still work
- [x] Keywords (`or`, `and`, `not`) correctly recognized as separators
- [x] Tested against the rule file from PR #5694
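The negative-lookahead idea described above can be illustrated with a small, self-contained Python tokenizer. This is a sketch of the approach only, not the actual Lark grammar in `lib/kql`; the regex consumes characters for a wildcard value but stops before the boolean keywords `or`/`and`/`not`:

```python
import re

# Illustrative token pattern: consume any character unless the text ahead
# is whitespace followed by a standalone boolean keyword.
TOKEN = re.compile(r"(?:(?!\s+(?:or|and|not)\b).)+", re.IGNORECASE)

def tokenize_values(expr):
    """Split a KQL value expression into wildcard literals, treating
    'or'/'and'/'not' as separators even when values contain spaces."""
    out, pos = [], 0
    while pos < len(expr):
        m = TOKEN.match(expr, pos)
        if m:
            out.append(m.group().strip())
            pos = m.end()
        else:
            # skip the keyword separator and surrounding whitespace
            kw = re.match(r"\s+(?:or|and|not)\b\s*", expr[pos:], re.IGNORECASE)
            pos += kw.end() if kw else 1
    return [t for t in out if t]

# '*S3 Browser*' survives as a single token despite its internal space
assert tokenize_values("*S3 Browser* or *Cyberduck*") == ["*S3 Browser*", "*Cyberduck*"]
assert tokenize_values("*test*") == ["*test*"]
```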
Pull Request
Issue link(s):
Summary - What I changed
This rule detects AWS API activity from the S3 Browser and Cyberduck desktop clients based on user agent strings. Both are graphical S3 management tools that provide bulk upload/download capabilities and have been observed in use by threat actors for data exfiltration. S3 Browser usage is specifically documented in the Permiso blog on LUCR-3 (Scattered Spider), which observed S3 Browser v10.9.9, while Cyberduck is referenced in the MITRE ATT&CK Threat Emulation of Scattered Spider. The rule uses a New Terms approach on `cloud.account.id` and `user.name` to alert only on the first occurrence per user/account, reducing noise from repeated `GetObject` or `PutObject` operations while still capturing new suspicious tool usage. Very few instances of either tool were observed in production or alert telemetry.

Existing Related Coverage: We have several S3-related exfiltration rules covering bucket replication, policy modifications, and ransomware indicators. This new rule closes a gap by detecting a specific attacker tooling fingerprint rather than relying solely on behavioral patterns.
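The deduplication effect of the New Terms approach can be sketched as follows. This is a client-side simplification of what the Elastic rule type does server-side, using the field names from the description; the event dicts are invented sample data:

```python
def first_occurrences(events):
    """Yield only the first event per (cloud.account.id, user.name) pair,
    mimicking how a New Terms rule suppresses repeat API calls such as
    GetObject/PutObject from an already-seen user in an account."""
    seen = set()
    for event in events:
        key = (event["cloud.account.id"], event["user.name"])
        if key not in seen:
            seen.add(key)
            yield event

events = [
    {"cloud.account.id": "111", "user.name": "alice", "event.action": "GetObject"},
    {"cloud.account.id": "111", "user.name": "alice", "event.action": "GetObject"},
    {"cloud.account.id": "111", "user.name": "bob",   "event.action": "PutObject"},
]
alerts = list(first_occurrences(events))
# one alert for alice, one for bob; the repeated GetObject is suppressed
assert len(alerts) == 2
```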
How To Test
query screenshot