Last updated: 2026-03-17
This document consolidates all planned work — features, cleanup, tests, docs, and operational tasks — into a single prioritized roadmap. Items are organized into phases for incremental delivery. Each phase should leave the system in a working state.
Cross-references: ADR-004, ADR-005, architecture.md, AGENTS.md
- 9 phases of C++23/Qt6 modernization
- PcapPlusPlus migration (replaced raw libpcap with PcapPlusPlus 25.05)
- ONNX Runtime ML inference (replaces frugally-deep)
- CNN-BiLSTM model trained on LSNM2024 (87.78% accuracy, 97.78% attack recall)
- Native C++ flow feature extraction (77 bidirectional flow features)
- Hybrid detection system (ML + Threat Intelligence + Heuristic Rules)
- `FeatureNormalizer` (z-score normalization with clip values)
- Threat feed update script (`scripts/ops/update_threat_feeds.sh`)
- ADR-004 (model benchmark analysis) and ADR-005 (hybrid detection design)
- Rewritten `docs/architecture.md` with detection philosophy and hybrid data flow
- GitHub Actions CI/CD
- CPack packaging (DEB/RPM/TGZ)
- Phase 7 — UI for hybrid detection results (tabbed Packets/Flows view, `FlowTableModel`, `DetectionDetailWidget`, worker thread, TI status panel, weight tuning dialog)
- Phase 6 — Cleanup, config, and test foundation (`ConfigLoader` + `--config`, legacy `analysisResults_` removed, 130 hybrid detection unit tests, MIT LICENSE, server/client stubs cleaned up)
- Phase 8 — Real-time flow extraction and analysis (Welford accumulators, timeout sweeps, streaming flow callbacks, producer-consumer pipeline, live packet API, LiveDetectionPipeline)
- Phase 9.1 — gRPC server (`NidsServiceImpl` with 7 RPCs, `GrpcStreamSink`, proto codegen, Conan `with_grpc` option, `nids-server` dual-mode executable)
- Phase 9.2 — CLI client (`NidsClient` gRPC wrapper, `nids-cli` with commands: status, interfaces, capture start/stop, stream with filter)
- Docker sandbox for inline IPS testing (3-container topology: server, attacker, victim on isolated `172.28.0.0/24` bridge network)
- Phase 9.4 — `--headless` flag on GUI binary (standalone capture + console output, no Qt dependency at runtime when headless)
- 15-step improvement plan (C++23 modernization: `[[nodiscard]]`, `std::span`, `std::expected`, `std::ranges`, `FilterBuilder`, `IAnalysisRepository`, `ICommand` pattern, `PacketParser` extraction, `ProtocolConstants.h`, Rule of Five, DIP fixes, CMake per-layer split)
- Self-audit fixes (removed dead `InMemoryAnalysisRepository`, stale test name renames, deleted unnecessary move ops on `CaptureSession`)
- Deep structural audit (12 of 14 fixes: `PipelineFactory`, merged `FlowInfo`/`FlowMetadata`, `SignalHandler.h`, `AsanOptions.h`, `WelfordAccumulator` extraction, `evaluate()` 2-arg rewrite, protocol-to-string deduplication, magic number cleanup, `BoundedQueue` → `core/concurrent/`, `PacketFilter` → `core/model/`)
- 5-layer deep audit cleanup (128 files, 19,616 lines audited across `core`/`infra`/`app`/`ui` + server + client + tests):
- Method extraction: 10 long methods decomposed (SRP) across OnnxAnalyzer, NativeFlowExtractor, LiveDetectionPipeline, FlowAnalysisWorker, AnalysisService, NidsServiceImpl, GrpcStreamSink, DetectionDetailWidget
- DRY fixes: `OnnxAnalyzer` `interpretOutput()`, `FilterPanel` `kStandardApps`, `WeightTuningDialog` `sliderToWeight()`, `NativeFlowExtractor` `restartFlow()`
- Bug fixes: StopCapture flagged count, GetStatus inline loop
- Dead code removal: `PacketInfo::application`, `FlowTableModel::protocolToString()`
- Type safety: `ServiceRegistry::getServiceByPort()` `int` → `uint16_t`
- Const-correctness: `IPacketCapture::listInterfaces()` `const`, Configuration getters return `const&` + `noexcept`
- Architecture: PipelineFactory explicit `nids_infra` CMake dependency
- Test split: `test_NativeFlowExtractor.cpp` (2,114 lines) → 6 focused files + `PcapTestHelpers.h` shared helpers
- Redundant qualifier removal across ~55 files
- WelfordAccumulator: struct→class with private members
- Final grade: A- across all layers (430 tests, zero architecture violations)
Goal: Remove backward-compatibility scaffolding, implement deferred functionality that has zero new dependencies, and establish test coverage for all new hybrid detection code.
- `ConfigLoader` in `infra/config/` parses JSON with `nlohmann_json`
- All config sections handled: model, capture, threat_intel, hybrid_detection, ui
- `main.cpp` has `parseConfigArg()` for the `--config /path/to/config.json` CLI arg
- Unknown keys silently ignored (partial JSON keeps other defaults)
- Legacy `analysisResults_`/`getAnalysisResult()`/`setAnalysisResult()` fully removed; all consumers migrated to the `DetectionResult`-based API
- Renamed `analysisResultCount()` → `detectionResultCount()` for consistency
- Updated all 10 call sites (MainWindow, test_CaptureSession, test_FlowAnalysisWorker)
- `clip_value` is now required in metadata JSON (error logged if absent)
- No default-to-10.0 fallback in `FeatureNormalizer::loadMetadata()`
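As a sketch of what z-score normalization with a clip value looks like (the function name and signature here are illustrative, not the project's actual `FeatureNormalizer` API):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch of z-score normalization with symmetric clipping,
// using per-feature mean/std and a clip_value as the metadata JSON implies.
std::vector<float> normalize(const std::vector<float>& features,
                             const std::vector<float>& mean,
                             const std::vector<float>& stddev,
                             float clipValue) {
    std::vector<float> out(features.size());
    for (std::size_t i = 0; i < features.size(); ++i) {
        const float sd = stddev[i] != 0.0f ? stddev[i] : 1.0f;  // guard divide-by-zero
        const float z = (features[i] - mean[i]) / sd;
        out[i] = std::clamp(z, -clipValue, clipValue);          // clip extreme outliers
    }
    return out;
}
```

Clipping keeps rare extreme flow features (e.g., huge byte counts) from saturating the model's input range.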
- Stubs already updated: proper Phase 9 references, spdlog logging, no legacy model paths, behind `NIDS_BUILD_SERVER=OFF` option
- Will be rewritten when Phase 9 (gRPC) is implemented
- File already removed; no CMake references exist
All 7 test files implemented with comprehensive coverage:
| Test file | Tests | Coverage |
|---|---|---|
| `test_ThreatIntelProvider.cpp` | 32 | Feed loading, CIDR matching, delimiters, edge cases |
| `test_HeuristicRuleEngine.cpp` | 27 | All 7 rules: trigger/below-threshold/edge values |
| `test_HybridDetectionService.cpp` | 17 | ML-only, TI escalation, ensemble, all detection sources |
| `test_FeatureNormalizer.cpp` | 20 | Load/normalize/clip/mismatch/reload |
| `test_DetectionResult.cpp` | 11 | Struct init, flags, maxSeverity, detectionSourceToString |
| `test_PredictionResult.cpp` | 6 | Struct init, isAttack, isHighConfidence |
| `test_Configuration.cpp` | 17 | Singleton, getters, ConfigLoader with valid/invalid/partial JSON |
- MIT License in project root
- CPack `CPACK_RESOURCE_FILE_LICENSE` points to `${CMAKE_SOURCE_DIR}/LICENSE`
Goal: Surface the rich DetectionResult data in the Qt UI so users can see
detection source, confidence scores, TI matches, and heuristic rule matches.
- Tabbed layout: "Packets" tab (existing `PacketTableModel`) + "Flows" tab
- `FlowTableModel` with 10 columns (Flow #, Src/Dst IP/Port, Protocol, Verdict, ML Confidence, Combined Score, Detection Source)
- Severity color-coding (green/yellow/orange/red)
- Batch and incremental row insertion
- `DetectionDetailWidget` shown when a flow row is selected
- Displays: flow metadata, ML verdict + confidence, probability distribution (16 rows), TI matches (IP, feed name, direction), heuristic rule matches (name, description, severity), combined score breakdown
- `AnalysisService` (QObject) moved to a dedicated `QThread` in the `MainWindow` constructor
- `runAnalysis()` dispatches via `QMetaObject::invokeMethod` with `Qt::QueuedConnection`
- Report prompt deferred to `populateFlowResults()` (after analysis finishes)
- Thread properly quit/waited in destructor
- Status bar shows "TI: X feeds, Y entries [feed1, feed2, ...] | Rules: N"
- `IThreatIntelligence` extended with `feedNames()` virtual method
- `MainWindow` receives non-owning `IThreatIntelligence*` and `IRuleEngine*`
- `WeightTuningDialog` with three linked sliders (ML/TI/Heuristic, sum-to-1.0 constraint)
- ML confidence threshold slider
- Proportional redistribution: adjusting one slider proportionally adjusts the others
- Apply saves to `HybridDetectionService` (runtime) and `Configuration` (persistent)
- Reset to defaults button
- Accessed via Settings > Detection Weights... menu
Goal: Transform the batch post-capture analysis pipeline into a real-time per-flow detection system. This is the highest-priority performance improvement documented in ADR-004.
- File: `src/infra/flow/NativeFlowExtractor.h`
- Completed: `std::unordered_map` with `FlowKeyHash` functor, O(1) amortized lookup
- Files: `NativeFlowExtractor.h`, `NativeFlowExtractor.cpp`, `test_NativeFlowExtractor.cpp`
- Completed: `WelfordAccumulator` struct with numerically stable online algorithm
- Replaced all 12 per-packet vectors with accumulator members (O(1) space per update)
- Per-flow memory reduced from ~7 KB to ~200 B
- Fixed backward IAT double-push bug in `updateDirectionStats()`
- Removed dead vector-based free functions (`mean`, `stddev`, `variance`)
- Added 5 `WelfordAccumulator` unit tests
- Files: `NativeFlowExtractor.h`, `NativeFlowExtractor.cpp`, `test_NativeFlowExtractor.cpp`
- Completed: `sweepExpiredFlows(nowUs)` public method iterates active flows and expires any idle beyond `flowTimeoutUs_`
- Called every 30 seconds (by packet timestamp) during batch pcap processing
- Designed for future live mode: external timer can call `sweepExpiredFlows()`
- Constructor now reads `flowTimeoutUs_` and `idleThresholdUs_` from `Configuration::instance()` (was hardcoded)
- Removed `kDefaultFlowTimeoutUs` and `kIdleThresholdUs` local constants
- `updateActiveIdle()` now accepts idle threshold as parameter
- Added 4 sweep-specific unit tests (284 total)
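The sweep mechanics can be illustrated with a toy flow table. Key and state types here are placeholders, not the project's `FlowKey` or flow records:

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Placeholder per-flow state: only the last-seen timestamp matters here.
struct FlowState {
    std::uint64_t lastSeenUs = 0;
};

class FlowTable {
public:
    void touch(int key, std::uint64_t nowUs) { flows_[key].lastSeenUs = nowUs; }

    // Walk active flows, erase any idle longer than timeoutUs, and return
    // the expired keys (the real sweep would fire completion callbacks).
    std::vector<int> sweepExpiredFlows(std::uint64_t nowUs, std::uint64_t timeoutUs) {
        std::vector<int> expired;
        for (auto it = flows_.begin(); it != flows_.end();) {
            if (nowUs - it->second.lastSeenUs > timeoutUs) {
                expired.push_back(it->first);
                it = flows_.erase(it);  // erase returns the next valid iterator
            } else {
                ++it;
            }
        }
        return expired;
    }

    std::size_t activeCount() const { return flows_.size(); }

private:
    std::unordered_map<int, FlowState> flows_;
};
```

Using the erase-returns-next-iterator idiom keeps the sweep a single O(n) pass with no iterator invalidation.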
- Files: `IFlowExtractor.h`, `NativeFlowExtractor.h/.cpp`, `AnalysisService.cpp`, `test_NativeFlowExtractor.cpp`, `test_AnalysisService.cpp`, `test_Pipeline.cpp`
- Completed: `FlowCompletionCallback` in `IFlowExtractor` fires for each completed flow (FIN/RST, max-packets, timeout sweep, end-of-capture)
- `NativeFlowExtractor::completeFlow()` and `finalizeBulks()` invoke the callback with the 77-float feature vector and `FlowInfo` metadata
- `AnalysisService::analyzeCapture()` uses the streaming callback to normalize, predict, and store results as flows complete, with no batch accumulation
- Backward-compatible batch fallback for extractors that don't invoke the callback
- Added 6 callback unit tests (290 total), updated 2 mock extractors
- All 290 unit + 31 Qt + 24 stress tests pass
- Files: `BoundedQueue.h`, `FlowAnalysisWorker.h/.cpp`, `AnalysisService.cpp`, `test_BoundedQueue.cpp`, `test_FlowAnalysisWorker.cpp`, `tests/CMakeLists.txt`, `src/app/CMakeLists.txt`
- Completed: `BoundedQueue<T>` thread-safe bounded FIFO (blocking push/pop, backpressure, close/end-of-stream semantics)
- Completed: `FlowAnalysisWorker` — `std::jthread`-based consumer that pops `FlowWorkItem` from a `BoundedQueue`, normalizes features, runs ML inference (with optional hybrid detection), stores results in `CaptureSession`, and invokes a `ResultCallback` for UI progress
- Completed: `AnalysisService::analyzeCapture()` wired to use the pipelined architecture — extraction and inference run concurrently on separate threads with bounded queue backpressure between them
- Batch fallback preserved for mock extractors and alternative implementations
- Added 14 `BoundedQueue` + 11 `FlowAnalysisWorker` unit tests (315 total)
- All 315 unit + 31 Qt + 24 stress tests pass
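A minimal sketch of the queue semantics described above: blocking push/pop for backpressure, a non-blocking `tryPush` (used later by the live pipeline), and `close()` for end-of-stream. The project's `BoundedQueue` may differ in detail.

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <optional>
#include <queue>

template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t capacity) : capacity_(capacity) {}

    // Blocks while full (backpressure); returns false once closed.
    bool push(T item) {
        std::unique_lock lock(mutex_);
        notFull_.wait(lock, [&] { return queue_.size() < capacity_ || closed_; });
        if (closed_) return false;
        queue_.push(std::move(item));
        notEmpty_.notify_one();
        return true;
    }

    // Non-blocking: returns false instead of waiting when full or closed.
    bool tryPush(T item) {
        std::lock_guard lock(mutex_);
        if (closed_ || queue_.size() >= capacity_) return false;
        queue_.push(std::move(item));
        notEmpty_.notify_one();
        return true;
    }

    // Blocks while empty; std::nullopt signals end-of-stream after close()
    // once any remaining items have been drained.
    std::optional<T> pop() {
        std::unique_lock lock(mutex_);
        notEmpty_.wait(lock, [&] { return !queue_.empty() || closed_; });
        if (queue_.empty()) return std::nullopt;
        T item = std::move(queue_.front());
        queue_.pop();
        notFull_.notify_one();
        return item;
    }

    void close() {
        std::lock_guard lock(mutex_);
        closed_ = true;
        notEmpty_.notify_all();
        notFull_.notify_all();
    }

private:
    std::size_t capacity_;
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable notFull_, notEmpty_;
    bool closed_ = false;
};
```

The producer (flow extractor) calls `push`/`tryPush`, the consumer (`FlowAnalysisWorker`) loops on `pop()` until it returns `std::nullopt`; `close()` is the clean shutdown signal between them.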
- Files: `IFlowExtractor.h`, `NativeFlowExtractor.h/.cpp`, `test_NativeFlowExtractor.cpp`, `test_AnalysisService.cpp`, `test_Pipeline.cpp`
- Completed: added 3 new pure virtual methods to the `IFlowExtractor` interface:
  - `processPacket(data, length, timestampUs)` — feed raw packets during live capture
  - `finalizeAllFlows()` — flush remaining active flows at end-of-capture
  - `reset()` — clear all internal state for a new capture session
- Completed: `NativeFlowExtractor` implements all 3 methods:
  - `processPacket()` wraps raw bytes in `pcpp::RawPacket`, delegates to the internal parser, and includes the periodic sweep (same 30s interval as batch mode)
  - `finalizeAllFlows()` calls `finalizeBulks()` to flush pending bulk counters and fires completion callbacks for all remaining active flows
  - `reset()` clears `flows_`, `completedFlows_`, `flowMetadata_`, `lastSweepTimeUs_`
  - `extractFeatures()` refactored to call `reset()` at start and `processPacketInternal()` internally (shared code path with live mode)
- Feature parity: live mode produces identical feature vectors to batch mode (verified by the `ProcessPacket_featureVectorMatchesBatchMode` test)
- Updated 2 mock extractors (AnalysisService, Pipeline tests) with no-op overrides
- Added 14 live mode unit tests (329 total)
- All 329 unit + 31 Qt + 24 stress tests pass
- Files: `IPacketCapture.h`, `PcapCapture.h/.cpp`, `CaptureController.h/.cpp`, `LiveDetectionPipeline.h/.cpp`, `FlowAnalysisWorker.h/.cpp`, `AnalysisService.cpp`, `MainWindow.cpp`, `main.cpp`, `src/app/CMakeLists.txt`, `test_CaptureController.cpp`, `test_Pipeline.cpp`, `test_FlowAnalysisWorker.cpp`
- Completed: `RawPacketCallback` on the `IPacketCapture` interface — fires on the capture thread with raw packet bytes + timestamp for live flow extraction
- Completed: `PcapCaptureWorker` fires the callback before parsing `PacketInfo`, thread-safe set/read via mutex
- Completed: `LiveDetectionPipeline` (new, `app/`) — pure C++23 orchestrator:
  - Manages `BoundedQueue<FlowWorkItem>` + `FlowAnalysisWorker` lifecycle
  - `feedPacket()` delegates to `IFlowExtractor::processPacket()`
  - Uses `tryPush()` (non-blocking) to avoid stalling the PcapPlusPlus thread; drops flows under backpressure with a logged warning
  - `start()` resets extractor, creates queue + worker
  - `stop()` finalizes remaining flows, drains queue, joins worker
- Completed: `CaptureController` gains `enableLiveDetection()`/`disableLiveDetection()`
  - On `startCapture()`: starts pipeline, registers raw packet callback
  - On `stopCapture()`: clears callback, finalizes + stops pipeline
  - `liveFlowDetected(DetectionResult, FlowInfo)` signal bridges worker thread → main thread via `QMetaObject::invokeMethod`
- Completed: `FlowAnalysisWorker::ResultCallback` extended to pass `FlowInfo`
- Completed: `main.cpp` creates separate `NativeFlowExtractor`, `FeatureNormalizer`, and `IPacketAnalyzer` instances for the live pipeline (no shared mutable state with `AnalysisService`)
- Completed: `MainWindow` connects `liveFlowDetected` → `FlowTableModel::addFlowResult()` for incremental row insertion during capture; skips the post-capture analysis prompt when live detection was active
- Updated 2 mock `IPacketCapture` implementations + 1 `FlowAnalysisWorker` test
- Thread model: PcapPlusPlus thread → `feedPacket()` → flow extractor → `BoundedQueue` → `FlowAnalysisWorker` (`std::jthread`) → `ResultCallback` → `QMetaObject::invokeMethod` → main thread → `FlowTableModel`
- All 329 unit + 31 Qt + 24 stress tests pass
Goal: Enable headless operation for server deployments, systemd services, and remote monitoring.
- Files: `src/server/NidsServer.h/.cpp`, `src/server/server_main.cpp`
- Proto: `proto/nids.proto` (7 RPCs: ListInterfaces, StartCapture, StopCapture, GetStatus, StreamDetections, StreamPackets, AnalyzeCapture)
- CMake: `if(NIDS_BUILD_SERVER)` block with protobuf/gRPC code generation, `nids_proto` static library, `nids-server-lib`, `nids-server` executable
- Conan: `with_grpc` option (`conan install . -o with_grpc=True`) pulls `grpc/1.72.0`, `protobuf/5.27.0`, `abseil`, `c-ares`, `openssl`, `re2`, `zlib`
- `NidsServiceImpl`: full implementations for ListInterfaces, StartCapture, StopCapture, GetStatus, StreamDetections; stubs for StreamPackets, AnalyzeCapture
- `GrpcStreamSink`: implements `IOutputSink` to bridge flow detections into gRPC server-streaming responses
- `NidsServer`: wrapper managing `grpc::Server` lifecycle (start/stop/blocking wait)
- Dual-mode `server_main.cpp`: `--no-grpc` for standalone capture + console output, default for gRPC server mode with full pipeline integration
- Generated proto headers marked as `SYSTEM` includes to avoid `-Werror` with GCC 15
- ASan `allow_user_poisoning=0` override for known gRPC epoll false positives
- Files: `src/client/NidsClient.h/.cpp`, `src/client/cli_main.cpp`
- `NidsClient`: typed C++ wrapper around the gRPC stub with connect/disconnect, listInterfaces, startCapture, stopCapture, getStatus, streamDetections
- `ClientConfig`: server address (default `localhost:50051`), 5s connect timeout, 30s per-RPC timeout
- `nids-cli` commands: `status`, `interfaces`, `capture start <iface> [--bpf]`, `capture stop [session-id]`, `stream [--filter flagged|clean|all]`, `help`
- Clean error handling: graceful 5s timeout on connection failure, proper exit codes
- Signal handling: Ctrl+C stops stream command gracefully
- Files: `docker/sandbox/Dockerfile.server`, `docker/sandbox/Dockerfile.attacker`, `docker/sandbox/Dockerfile.victim`, `docker/sandbox/compose.yml`, `docker/sandbox/scripts/victim-start.sh`, `docker/sandbox/scripts/generate-benign.sh`, `docker/sandbox/scripts/generate-attacks.sh`
- 3-container topology on isolated `172.28.0.0/24` bridge:
  - `nids-server` (172.28.0.10) — two-stage build, compiles from source with `NIDS_BUILD_SERVER=ON`
  - `attacker` (172.28.0.20) — Ubuntu 24.04 with hping3, nmap, scapy, curl, ab, iperf3
  - `victim` (172.28.0.30) — Python HTTP server, dropbear SSH, iperf3, netcat
- Attack scripts generate 8 attack types matching NIDS model classes
- Benign scripts generate HTTP, ping, iperf3, TCP connect patterns
- Files: `src/main.cpp`
- `--headless --interface <iface>` skips Qt initialization entirely, runs standalone capture with `LiveDetectionPipeline` + `ConsoleAlertSink`
- Also added `--bpf` and `--help`/`-h` flags to the GUI binary
- Requires `--interface` in headless mode (validated with error message)
- Graceful shutdown via SIGINT/SIGTERM
- Note: for gRPC server mode, use the separate `nids-server` binary instead
- Files: `src/main.cpp`, `src/server/server_main.cpp`
- Parse `--config /path/to/config.json` from `argv`, pass to `Configuration::loadFromFile()` via `ConfigLoader`
- Both GUI (`main.cpp`) and server (`server_main.cpp`) support this flag
Goal: Improve ML accuracy and detection coverage based on ADR-004 analysis.
- DDoS-ICMP + ICMP-Flood → single "ICMP Flood/DDoS" class
- Evaluate merging DoS + RCE if operational distinction is not needed
- Retrain model, update `AttackType.h`, `attackTypeToString()`
- See ADR-004 and `docs/architecture.md:94`
- Train baseline models on the same 77 flow features
- Compare accuracy, inference time, model size
- Document results in ADR-004
- Post-hoc calibration of ML confidence scores
- Train calibration parameter on validation set
- Apply in `OnnxAnalyzer::predictWithConfidence()` or as a separate step
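One common post-hoc calibration technique is temperature scaling: divide the logits by a scalar T fit on the validation set before applying softmax, so T > 1 softens overconfident probabilities. The roadmap does not prescribe a specific method; this is an illustrative sketch only.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Temperature-scaled softmax: identical ranking to plain softmax, but the
// probability mass is spread out (T > 1) or sharpened (T < 1).
std::vector<float> calibratedSoftmax(const std::vector<float>& logits,
                                     float temperature) {
    std::vector<float> probs(logits.size());
    const float maxLogit = *std::max_element(logits.begin(), logits.end());
    float sum = 0.0f;
    for (std::size_t i = 0; i < logits.size(); ++i) {
        // Subtract the max logit for numerical stability before exp().
        probs[i] = std::exp((logits[i] - maxLogit) / temperature);
        sum += probs[i];
    }
    for (float& p : probs) p /= sum;
    return probs;
}
```

Because scaling by T preserves the argmax, calibration changes reported confidence without changing predicted classes.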
- Currently `ThreatIntelProvider::loadDirectory()` runs synchronously at startup
- Move to async loading with a ready signal
- Required for real-time mode where startup latency matters
- Evaluate: AlienVault OTX, AbuseIPDB, FireHOL Level 1/2/3
- Add feed-specific parsers to `ThreatIntelProvider`
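Feeds like FireHOL mix single IPs and CIDR blocks, so the matcher needs prefix containment. A sketch of IPv4 CIDR matching (illustrative only; `ThreatIntelProvider`'s internals are not shown here):

```cpp
#include <cstdint>

// An address is inside network/prefixLen when the top prefixLen bits agree.
bool cidrContains(std::uint32_t network, int prefixLen, std::uint32_t addr) {
    if (prefixLen <= 0) return true;  // a /0 matches everything
    const std::uint32_t mask =
        prefixLen >= 32 ? 0xFFFFFFFFu : ~((1u << (32 - prefixLen)) - 1u);
    return (network & mask) == (addr & mask);
}

// Helper: build a host-order address from dotted-quad octets.
constexpr std::uint32_t ipv4(std::uint32_t a, std::uint32_t b,
                             std::uint32_t c, std::uint32_t d) {
    return (a << 24) | (b << 16) | (c << 8) | d;
}
```

The `prefixLen >= 32` branch avoids the undefined behavior of shifting a 32-bit value by 32.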
- Evaluate feasibility for encrypted traffic metadata analysis
- Would require TLS handshake parsing in `NativeFlowExtractor`
- See ADR-005
- Create `Doxyfile` in `docs/`
- Ensure all public APIs in `core/` and `app/` have `/** ... */` documentation
- Generate HTML docs, optionally host via GitHub Pages
- Add ADR-004 and ADR-005 to the Architecture Decision Records section
- Add hybrid detection to the roadmap checklist
- Update feature list
- Add coverage gate in CI (fail build if `core/` + `app/` < 80%)
- Track with SonarCloud quality gate
Goal: Forward detection alerts to SIEM, HIDS, and log management infrastructure.
See detailed spec for full component designs.
- 12.1 — `SyslogSink` (RFC 5424 over UDP/TCP/TLS)
- 12.2 — `CefFormatter` (ArcSight Common Event Format)
- 12.3 — `LeefFormatter` (IBM QRadar LEEF)
- 12.4 — `WazuhApiSink` (Wazuh manager REST API)
- 12.5 — `JsonFileSink` (JSON-lines file output with rotation)
- 12.6 — `SinkChain` (fan-out to multiple sinks)
- 12.7 — `OutputSinkFactory` (create sinks from config)
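The fan-out idea behind `SinkChain` can be sketched as a composite over the sink interface. These components are still planned, so interface and names here are illustrative, not implemented code:

```cpp
#include <memory>
#include <string>
#include <vector>

// Illustrative sink interface: one alert in, one delivery mechanism out.
struct IOutputSink {
    virtual ~IOutputSink() = default;
    virtual void write(const std::string& alert) = 0;
};

// Composite sink: forwards each alert to every configured child sink,
// so one detection can go to syslog, a JSON file, and Wazuh at once.
class SinkChain : public IOutputSink {
public:
    void addSink(std::unique_ptr<IOutputSink> sink) {
        sinks_.push_back(std::move(sink));
    }
    void write(const std::string& alert) override {
        for (auto& sink : sinks_) sink->write(alert);  // fan out
    }

private:
    std::vector<std::unique_ptr<IOutputSink>> sinks_;
};
```

Because `SinkChain` itself implements `IOutputSink`, the detection pipeline needs no knowledge of how many sinks are configured.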
Goal: Proactive retroactive search for threats in historical network traffic.
See detailed spec for full component designs.
- 13.1 — `PcapRingBuffer` (rolling PCAP storage with retention policies)
- 13.2 — `SqliteFlowIndex` (flow metadata database for historical queries)
- 13.3 — `HuntEngine` (retroactive analysis, IOC search, correlation, timeline)
- 13.4 — `StatisticalBaseline` (traffic pattern baselining + anomaly detection)
- 13.5 — gRPC hunt RPCs + CLI `nids-cli hunt` commands
Dependencies: SQLite3 or DuckDB
Goal: Content/pattern scanning for malware signatures, C2 beacons, and exploit payloads.
See detailed spec for full component designs.
- 14.1 — `IContentScanner` interface + `ContentMatch` model
- 14.2 — `YaraScanner` (libyara RAII wrapper)
- 14.3 — `TcpReassembler` (PcapPlusPlus TCP stream reassembly)
- 14.4 — Pipeline integration (per-packet + per-stream scanning)
- 14.5 — `HybridDetectionService` 5-layer evaluation with YARA
- 14.6 — Bundled YARA rules (C2, exploits, tools) + hot reload
Dependencies: libyara 4.x
Goal: Per-packet signature matching with Snort 3.x rule syntax.
See detailed spec for full component designs.
- 15.1 — `ISignatureEngine` interface + `SignatureMatch` + `SnortRule` models
- 15.2 — `SnortRuleParser` (parse Snort rule syntax into AST)
- 15.3 — `ContentMatcher` (Aho-Corasick multi-pattern search)
- 15.4 — `PcreEngine` (PCRE2 regex wrapper)
- 15.5 — `FlowStateTracker` (TCP connection state for the `flow:` option)
- 15.6 — `FlowbitsManager` (cross-rule stateful correlation)
- 15.7 — `RuleVariableStore` (`$HOME_NET`, `$EXTERNAL_NET` resolution)
- 15.8 — `SnortRuleEngine` (main orchestrator with port-group pre-filter)
- 15.9 — Pipeline integration + ET Open ruleset testing
Dependencies: PCRE2, optionally Hyperscan (Intel x86_64)
Goal: Active inline prevention — dual-NIC bridge with per-packet forward/drop.
See detailed spec for full component designs.
Requires Phase 15 (Snort rules for per-packet verdicts).
- 16.1 — `PacketVerdict` + `IInlineCapture` interface
- 16.2 — `NfqueueCapture` (Netfilter Queue inline, simpler path)
- 16.3 — `AfPacketCapture` (AF_PACKET v3, high-performance)
- 16.4 — `VerdictEngine` (combine TI + signatures + ML into per-packet verdict)
- 16.5 — `NetfilterBlocker` (dynamic iptables/nftables for ML-informed blocking)
- 16.6 — `BypassManager` (kernel-level forwarding for verified-clean flows)
- 16.7 — `InlinePipeline` (orchestrator for inline IPS lifecycle)
- 16.8 — Fail-open / fail-closed modes + watchdog
- 16.9 — Docker sandbox dual-network topology + integration tests
- 16.9 — Docker sandbox dual-network topology + integration tests
Dependencies: Linux kernel headers, libnetfilter_queue, Phase 15
Platform: Linux only (passive mode remains cross-platform)
These items are documented for completeness but are not planned for near-term work.
| Item | Source | Notes |
|---|---|---|
| Web dashboard | README.md | Would replace or supplement Qt UI for remote monitoring |
| NLFlowLyzer feature extraction | ADR-004 | Requires reimplementing NLFlowLyzer in C++ |
| Hyperparameter search (Optuna) | ADR-004 | Low priority given accuracy ceiling evidence |
| Additional ML backends (TensorRT, OpenVINO) | architecture.md | AnalyzerFactory designed for extensibility |
| Concept-drift detection / auto-retraining | ADR-004, ADR-005 | Requires monitoring infrastructure |
| NSIS Windows installer | AGENTS.md | CPack configuration for Windows |
| `IProtocolParser` strategy interface | AGENTS.md §5.1 | For pluggable protocol parsers |
| `FilterBuilder` builder pattern | | Done — `FilterBuilder` in `core/model/PacketFilter.h` |
| `IAnalysisRepository` repository pattern | | Done — interface in `core/services/IAnalysisRepository.h` |
| `ICommand` pattern | | Done — `ICommand` + CaptureCommands in `app/commands/` |
| `std::expected<T, E>` error handling | | Done — `loadModel()`, `loadMetadata()`, `initialize()` return `std::expected` |
| `ServiceRegistry` optimize to unordered_map | `ServiceRegistry.h:23` | Done — already uses `std::unordered_map` |
| YARA content scanning | | Planned — Phase 14 |
| Snort rule engine | | Planned — Phase 15 (see ADR-008) |
| Inline IPS gateway | | Planned — Phase 16 (see ADR-008) |
For implementation, the recommended order is:
- Phase 6 — Cleanup + tests + config [DONE]
- Phase 7 — UI for hybrid results [DONE]
- Phase 8 — Real-time flow extraction [DONE]
- Phase 9 — gRPC server/client [DONE]
- Phase 10 — Model improvements (iterative, can be done in parallel with others)
- Phase 11 — Documentation polish (ongoing)
- Phase 12 — SIEM output sinks (4-5 weeks, zero new deps, immediate operational value)
- Phase 13 — Threat hunting (6-8 weeks, SQLite, high SOC value)
- Phase 14 — YARA rules (6-8 weeks, libyara, malware/C2 detection)
- Phase 15 — Snort rules (10-14 weeks, PCRE2, comprehensive signature coverage)
- Phase 16 — Inline IPS gateway (13-18 weeks, Linux-only, depends on Phase 15)