
fix: Update the maestro SPIKE with real implementation #88

Open

xueli181114 wants to merge 1 commit into openshift-hyperfleet:main from xueli181114:HYPERFLEET-530

Conversation

@xueli181114 (Contributor) commented Feb 12, 2026

Summary by CodeRabbit

Release Notes

  • Documentation

    • Updated adapter integration documentation with clearer transport and status handling terminology.
    • Enhanced maestro integration guidance with explicit configuration field references.
  • Chores

    • Restructured adapter configuration templates to use Kubernetes-style CRD format with improved organization of parameters, preconditions, and resources.
    • Updated manifest injection approach with explicit inline resource definitions.
    • Refined adapter deployment configuration with streamlined client settings.

coderabbitai bot (Contributor) commented Feb 12, 2026

Walkthrough

These changes restructure the adapter framework configuration from a flat template structure to a Kubernetes CRD-style AdapterTaskConfig with a versioned API (hyperfleet.redhat.com/v1alpha1). The business-logic MVP template migrates from inline resource definitions to a manifest-reference approach with explicit preconditions, structured parameters (clusterId, generationId), and streamlined CEL-based status conditions in post-processing.

The deployment config updates the adapter identity from aro-hcp-adapter to example-adapter, restructures client configuration by replacing the httpAPI and maestro blocks with a hyperfleetApi block, and introduces broker support. The manifestwork reference now uses explicit inline manifests instead of framework-driven injection. Documentation and code reflect a terminology clarification (kubernetes vs maestro transports) and rename TransportMeta.Type to TransportMeta.Client.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • rh-amarin
  • ciaranRoche
🚥 Pre-merge checks: 2 passed, 1 inconclusive

❌ Failed checks (1 inconclusive)

  • Title check — ❓ Inconclusive. The title claims to fix and update a SPIKE with real implementation, but the actual changes include significant structural refactoring of adapter configuration templates and API changes (TransportMeta field rename), which extend beyond merely updating documentation. Resolution: clarify whether this PR is primarily a documentation update or a broader refactoring that includes configuration schema changes and public API modifications.

✅ Passed checks (2 passed)

  • Description check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Docstring coverage — ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping the check.


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

🤖 Fix all issues with AI agents
In
`@hyperfleet/components/adapter/framework/configs/adapter-business-logic-template-MVP.yaml`:
- Around line 96-124: The discovery names for the ManifestWork and ConfigMap are
incorrect: update the top-level discovery.byName for resource
"agentNamespaceManifestWork" to match the ManifestWork name template
"hyperfleet-cluster-setup-{{ .clusterId }}", and change the nestedDiscovery
"mgmtConfigMap" discovery.byName to the actual ConfigMap name "cluster-config"
(leave "mgmtNamespace" selector as-is); reference resource name
agentNamespaceManifestWork and nestedDiscoveries mgmtNamespace and mgmtConfigMap
when making the edits.

In
`@hyperfleet/components/adapter/framework/configs/adapter-deployment-config-template.yaml`:
- Around line 24-27: The deployment template currently enables verbose
configuration dumping by setting debugConfig: true and log.level: debug; change
the defaults to safer values (set debugConfig to false and log.level to info or
warn) so the merged config (which may include secrets) is not emitted by
default, and add/mention an explicit opt-in mechanism (e.g., an environment
variable or a separate debug flag like DEBUG_CONFIG or ADAPTER_DEBUG) so
developers can enable full debug output intentionally in non-production
scenarios; update any documentation/comments near the debugConfig and log.level
entries to reflect the new safe defaults and the opt-in mechanism.
- Around line 46-65: The Maestro client default is insecure: true which disables
TLS; change the default to insecure: false in the maestro config block (field
name: insecure) so production deployments use TLS, update the inline comment to
state "Set to true only for local development/testing" and reference the
corresponding environment variable (HYPERFLEET_MAESTRO_INSECURE) in the comment
so operators know how to override it for dev; ensure any README or deployment
docs mention the safe default and how to enable insecure mode for development.

In `@hyperfleet/components/adapter/framework/configs/manifestwork-ref.yaml`:
- Around line 75-98: The Namespace manifest uses "{{ .clusterId | lower }}" but
the ConfigMap still targets "{{ .clusterId }}" and some data/labels use the
non-lowercased value, causing namespace mismatch; also the label
hyperfleet.io/managed-by uses an undocumented "{{ .metadata.name }}" variable
which may be empty—update the ConfigMap.metadata.namespace to use "{{ .clusterId
| lower }}", update any labels and data fields that reference clusterId (e.g.,
metadata.labels.hyperfleet.io/cluster-id and data.cluster_id) to use the same
"{{ .clusterId | lower }}" form so they match the Namespace, and remove or
replace the hyperfleet.io/managed-by: "{{ .metadata.name }}" label with a
documented template variable or drop that label entirely (adjust manifests in
the inline YAML where Namespace and ConfigMap are defined).

Comment on lines +96 to +124
  # All resources are created/updated sequentially in the order defined below
  resources:
    - name: "agentNamespaceManifestWork"
      transport:
        client: "maestro"
        maestro:
          targetCluster: "{{ .placementClusterName }}"

      # ManifestWork manifest reference (external file recommended)
      manifest:
        ref: "/etc/adapter/manifestwork.yaml"

      # Discover the manifestWork by name
      discovery:
        byName: "{{ .clusterId }}"

      # Health: Adapter execution status (runtime). Don't need to update this; it can be reused from the adapter config.
      - type: "Health"
        status:
          expression: |
            adapter.?executionStatus.orValue("") == "success" ? "True" : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown")
        reason:
          expression: |
            adapter.?errorReason.orValue("") != "" ? adapter.?errorReason.orValue("") : "Healthy"
        message:
          expression: |
            adapter.?errorMessage.orValue("") != "" ? adapter.?errorMessage.orValue("") : "All adapter operations completed successfully"

      # Use CEL expressions for numeric fields to preserve type (not Go templates, which output strings)
      observed_generation:
        expression: "generationId"

      # Discover sub-resources within the manifestWork
      # Supports jsonPath to dig into resource status
      nestedDiscoveries:
        - name: "mgmtNamespace"
          discovery:
            bySelectors:
              labelSelector:
                hyperfleet.io/resource-type: "namespace"
                hyperfleet.io/cluster-id: "{{ .clusterId }}"
        - name: "mgmtConfigMap"
          discovery:
            byName: "{{ .clusterId }}"


⚠️ Potential issue | 🟠 Major

Align discovery names with ManifestWork and ConfigMap names.

manifestwork-ref.yaml defines the ManifestWork name as hyperfleet-cluster-setup-{{ .clusterId }} and the ConfigMap name as cluster-config, but discovery uses {{ .clusterId }} for both. Status discovery will fail if names don’t match.

🛠️ Proposed fix
       discovery:
-        byName: "{{ .clusterId }}"
+        byName: "hyperfleet-cluster-setup-{{ .clusterId }}"
 ...
         - name: "mgmtConfigMap"
           discovery:
-            byName: "{{ .clusterId }}"
+            byName: "cluster-config"
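Aside: the quoted Health condition uses CEL ternaries with optional-field access (`.?` plus `orValue`). A rough Python stand-in for that mapping logic, for illustration only (this is not actual CEL evaluation, and the function name is hypothetical):

```python
def health_condition(adapter: dict) -> dict:
    """Python mirror of the quoted CEL Health expressions."""
    # adapter.?executionStatus.orValue("") in CEL ~ dict.get(..., "") here
    status = adapter.get("executionStatus", "")
    reason = adapter.get("errorReason", "")
    message = adapter.get("errorMessage", "")
    return {
        "status": "True" if status == "success"
                  else ("False" if status == "failed" else "Unknown"),
        "reason": reason if reason != "" else "Healthy",
        "message": message if message != ""
                   else "All adapter operations completed successfully",
    }
```

The point of the `orValue("")` guards is that a missing field degrades to the "Unknown"/"Healthy" defaults rather than failing evaluation.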

Comment on lines +24 to +27
  # Log the full merged configuration after load (default: false)
  debugConfig: true
  log:
    level: debug

⚠️ Potential issue | 🟠 Major

Default debug config may expose sensitive data.

debugConfig: true plus level: debug can emit full merged config (often including secrets). Defaulting to safer settings reduces accidental leakage in production.

🛠️ Proposed fix
-  debugConfig: true
-  log:
-    level: debug
+  debugConfig: false
+  log:
+    level: info
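One way to implement the suggested opt-in is an environment-variable override on top of the safe file default; a minimal sketch, assuming a hypothetical ADAPTER_DEBUG variable and helper name (neither is from the codebase):

```python
import os

def resolve_debug_config(file_value: bool = False) -> bool:
    """Effective debugConfig: safe default from the config file,
    changeable only by an explicit env-var opt-in (hypothetical name)."""
    override = os.environ.get("ADAPTER_DEBUG", "").strip().lower()
    if override in ("1", "true", "yes"):
        return True   # developer explicitly opted in to config dumping
    if override in ("0", "false", "no"):
        return False  # explicit opt-out wins over the file value
    return file_value  # no override: keep the (safe) file default
```

With debugConfig: false shipped in the template, full config dumps then require a deliberate `ADAPTER_DEBUG=true` in the environment.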

Comment on lines +46 to +65
    # Maestro transport client (optional - only if adapter uses Maestro transport)
    maestro:
      # Maestro gRPC server connection (for ManifestWork operations)
      # Environment variable: HYPERFLEET_MAESTRO_GRPC_SERVER_ADDRESS
      grpcServerAddress: "maestro-grpc.maestro.svc.cluster.local:8090"

      # HTTP server address for REST API operations (optional)
      # Environment variable: HYPERFLEET_MAESTRO_HTTP_SERVER_ADDRESS
      httpServerAddress: "http://maestro.maestro.svc.cluster.local:8000"

      # Source identifier for CloudEvents routing (must be unique across adapters)
      # Environment variable: HYPERFLEET_MAESTRO_SOURCE_ID
      sourceId: "hyperfleet-adapter"

      # Client identifier (defaults to sourceId if not specified)
      # Environment variable: HYPERFLEET_MAESTRO_CLIENT_ID
      clientId: "hyperfleet-adapter-client"

      # Set to true for development without TLS
      insecure: true

⚠️ Potential issue | 🟠 Major

Avoid shipping an insecure Maestro client by default.

insecure: true disables TLS and weakens the default security posture. Safer default is false with a note to enable for dev.

🛠️ Proposed fix
-      insecure: true
+      insecure: false
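Beyond flipping the default, a load-time guard can make TLS bypass an explicit, dev-only choice; a minimal sketch (the helper and the "environment" parameter are hypothetical, though the HYPERFLEET_MAESTRO_INSECURE name follows the env-var pattern of the fields above):

```python
def validate_maestro_client(cfg: dict, environment: str = "production") -> dict:
    """Reject a TLS-less Maestro client outside development/testing.

    Illustrative guard, not from the codebase: insecure=true should be a
    deliberate local-dev override, never a shipped production default.
    """
    if cfg.get("insecure", False) and environment not in ("development", "testing"):
        raise ValueError(
            "maestro.insecure=true disables TLS and is only allowed in "
            "development/testing (e.g. via HYPERFLEET_MAESTRO_INSECURE)"
        )
    return cfg
```

A guard like this turns a silent misconfiguration into a startup error in production.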

Comment on lines +75 to +98
  manifests:
    - apiVersion: v1
      kind: Namespace
      metadata:
        name: "{{ .clusterId | lower }}"
        labels:
          hyperfleet.io/cluster-id: "{{ .clusterId }}"
          hyperfleet.io/managed-by: "{{ .metadata.name }}"
          hyperfleet.io/resource-type: "namespace"
        annotations:
          hyperfleet.io/created-by: "hyperfleet-adapter"
          hyperfleet.io/generation: "{{ .generationId }}"
    - apiVersion: v1
      kind: ConfigMap
      metadata:
        name: "cluster-config"
        namespace: "{{ .clusterId }}"
        labels:
          hyperfleet.io/cluster-id: "{{ .clusterId }}"
        annotations:
          hyperfleet.io/generation: "{{ .generationId }}"
      data:
        cluster_id: "{{ .clusterId }}"
        cluster_name: "{{ .clusterName }}"

⚠️ Potential issue | 🟠 Major

Fix template var usage and namespace mismatch in inline manifests.

Namespace uses {{ .clusterId | lower }}, but the ConfigMap targets {{ .clusterId }}; if clusterId isn’t already lowercase, the ConfigMap will land in a non-existent namespace. Also, {{ .metadata.name }} isn’t in the documented template variables list, so the label may resolve empty.

🛠️ Proposed fix
-          hyperfleet.io/managed-by: "{{ .metadata.name }}"
+          hyperfleet.io/managed-by: "{{ .adapterName }}"
...
-        namespace: "{{ .clusterId }}"
+        namespace: "{{ .clusterId | lower }}"
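The mismatch is easy to reproduce with a toy renderer; a Python sketch standing in for Go-template rendering of the two fields (illustration only — the real templates use Go's text/template with a lower filter, and the render helper here is hypothetical):

```python
def render(template: str, variables: dict) -> str:
    """Toy stand-in for Go-template rendering: supports only
    '{{ .name }}' and '{{ .name | lower }}' substitutions."""
    out = template
    for key, value in variables.items():
        # Replace the piped form first so the plain form doesn't clobber it
        out = out.replace("{{ ." + key + " | lower }}", str(value).lower())
        out = out.replace("{{ ." + key + " }}", str(value))
    return out

cluster = {"clusterId": "Cluster-ABC"}  # mixed-case id triggers the bug
ns_name = render('{{ .clusterId | lower }}', cluster)   # Namespace name
cm_namespace = render('{{ .clusterId }}', cluster)      # ConfigMap target (buggy)
cm_fixed = render('{{ .clusterId | lower }}', cluster)  # ConfigMap target (fixed)
```

For any clusterId that isn't already lowercase, ns_name and cm_namespace diverge, so the ConfigMap targets a namespace that was never created; applying `| lower` to both fields restores the match.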
