fix: Update the maestro SPIKE with real implementation #88

xueli181114 wants to merge 1 commit into openshift-hyperfleet:main
Conversation
Walkthrough

These changes restructure the adapter framework configuration from a flat template structure to a Kubernetes CRD-style AdapterTaskConfig with a versioned API (hyperfleet.redhat.com/v1alpha1). The business logic MVP template migrates from inline resource definitions to a manifest-reference approach with explicit preconditions, structured parameters (clusterId, generationId), and streamlined CEL-based status conditions in post-processing. The deployment config updates the adapter identity from aro-hcp-adapter to example-adapter, restructures client configuration by replacing the httpAPI and maestro blocks with a hyperfleetApi block, and introduces broker support. The manifestwork reference now uses explicit inline manifests instead of framework-driven injection. Documentation and code reflect terminology clarification (kubernetes vs maestro transports) and rename TransportMeta.Type to TransportMeta.Client.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
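Since the walkthrough references the new CRD-style shape without showing it, here is a minimal sketch of what the versioned header could look like. Only the apiVersion, the kind AdapterTaskConfig, and the parameter names clusterId/generationId come from this PR's summary; the metadata and spec layout below are illustrative assumptions, not the actual schema.

```yaml
# Hypothetical sketch of the CRD-style config described above.
# apiVersion, kind, and the parameter names are taken from the PR
# summary; all other field names are assumed placeholders.
apiVersion: hyperfleet.redhat.com/v1alpha1
kind: AdapterTaskConfig
metadata:
  name: example-adapter-task      # assumed name
spec:
  parameters:                     # structured parameters per the walkthrough
    - name: clusterId
    - name: generationId
  resources: []                   # manifest-reference resources go here
```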
Actionable comments posted: 4
🤖 Fix all issues with AI agents

In `@hyperfleet/components/adapter/framework/configs/adapter-business-logic-template-MVP.yaml`:

- Around lines 96-124: The discovery names for the ManifestWork and ConfigMap are incorrect. Update the top-level discovery.byName for the resource "agentNamespaceManifestWork" to match the ManifestWork name template "hyperfleet-cluster-setup-{{ .clusterId }}", and change the nestedDiscovery "mgmtConfigMap" discovery.byName to the actual ConfigMap name "cluster-config" (leave the "mgmtNamespace" selector as-is). Reference the resource name agentNamespaceManifestWork and the nestedDiscoveries mgmtNamespace and mgmtConfigMap when making the edits.

In `@hyperfleet/components/adapter/framework/configs/adapter-deployment-config-template.yaml`:

- Around lines 24-27: The deployment template currently enables verbose configuration dumping by setting debugConfig: true and log.level: debug. Change the defaults to safer values (set debugConfig to false and log.level to info or warn) so the merged config, which may include secrets, is not emitted by default, and add an explicit opt-in mechanism (e.g., an environment variable or a separate debug flag like DEBUG_CONFIG or ADAPTER_DEBUG) so developers can intentionally enable full debug output in non-production scenarios. Update any documentation and comments near the debugConfig and log.level entries to reflect the new safe defaults and the opt-in mechanism.

- Around lines 46-65: The Maestro client defaults to insecure: true, which disables TLS. Change the default to insecure: false in the maestro config block (field name: insecure) so production deployments use TLS, update the inline comment to state "Set to true only for local development/testing", and reference the corresponding environment variable (HYPERFLEET_MAESTRO_INSECURE) in the comment so operators know how to override it for dev. Ensure any README or deployment docs mention the safe default and how to enable insecure mode for development.

In `@hyperfleet/components/adapter/framework/configs/manifestwork-ref.yaml`:

- Around lines 75-98: The Namespace manifest uses "{{ .clusterId | lower }}", but the ConfigMap still targets "{{ .clusterId }}" and some data/labels use the non-lowercased value, causing a namespace mismatch. The label hyperfleet.io/managed-by also uses an undocumented "{{ .metadata.name }}" variable, which may resolve empty. Update ConfigMap.metadata.namespace to use "{{ .clusterId | lower }}", update any labels and data fields that reference clusterId (e.g., metadata.labels.hyperfleet.io/cluster-id and data.cluster_id) to use the same "{{ .clusterId | lower }}" form so they match the Namespace, and remove or replace the hyperfleet.io/managed-by: "{{ .metadata.name }}" label with a documented template variable or drop that label entirely (adjust the manifests in the inline YAML where the Namespace and ConfigMap are defined).
```yaml
# All resources are created/updated sequentially in the order defined below
resources:
  - name: "agentNamespaceManifestWork"
    transport:
      client: "maestro"
      maestro:
        targetCluster: "{{ .placementClusterName }}"

    # ManifestWork manifest reference (external file recommended)
    manifest:
      ref: "/etc/adapter/manifestwork.yaml"

    # Discover the manifestWork by name
    discovery:
      byName: "{{ .clusterId }}"

# Health: Adapter execution status (runtime) Don't need to update this. This can be reused from the adapter config.
- type: "Health"
  status:
    expression: |
      adapter.?executionStatus.orValue("") == "success" ? "True" : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown")
  reason:
    expression: |
      adapter.?errorReason.orValue("") != "" ? adapter.?errorReason.orValue("") : "Healthy"
  message:
    expression: |
      adapter.?errorMessage.orValue("") != "" ? adapter.?errorMessage.orValue("") : "All adapter operations completed successfully"

# Use CEL expression for numeric fields to preserve type (not Go template which outputs strings)
observed_generation:
  expression: "generationId"

# Discover sub-resources within the manifestWork
# Supports jsonPath to dig into resource status
nestedDiscoveries:
  - name: "mgmtNamespace"
    discovery:
      bySelectors:
        labelSelector:
          hyperfleet.io/resource-type: "namespace"
          hyperfleet.io/cluster-id: "{{ .clusterId }}"
  - name: "mgmtConfigMap"
    discovery:
      byName: "{{ .clusterId }}"
```
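A note on the typing comment in this hunk: Go templates always render to text, so a numeric field routed through a template would land in the generated YAML as a string (e.g., "42") rather than an integer, while a CEL expression preserves the original type. A minimal illustration follows; the generationId value of 42 is hypothetical.

```yaml
# Go template rendering (everything becomes a string):
observed_generation: "{{ .generationId }}"   # -> observed_generation: "42"

# CEL expression (numeric type is preserved):
observed_generation:
  expression: "generationId"                 # -> observed_generation: 42
```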
Align discovery names with the ManifestWork and ConfigMap names.

manifestwork-ref.yaml defines the ManifestWork name as hyperfleet-cluster-setup-{{ .clusterId }} and the ConfigMap name as cluster-config, but discovery uses {{ .clusterId }} for both. Status discovery will fail if the names don't match.
🛠️ Proposed fix

```diff
     discovery:
-      byName: "{{ .clusterId }}"
+      byName: "hyperfleet-cluster-setup-{{ .clusterId }}"
 ...
       - name: "mgmtConfigMap"
         discovery:
-          byName: "{{ .clusterId }}"
+          byName: "cluster-config"
```
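For context, the names the discovery must match are defined in manifestwork-ref.yaml. A trimmed sketch of the relevant identifiers follows; the two names are the ones cited in this review, while the surrounding structure (including the standard OCM apiVersion for ManifestWork) is abbreviated and may differ from the actual file.

```yaml
# Abbreviated sketch of the names defined in manifestwork-ref.yaml.
# Only the two quoted names come from this review; other fields are elided.
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: "hyperfleet-cluster-setup-{{ .clusterId }}"   # top-level discovery.byName must match this
spec:
  workload:
    manifests:
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: "cluster-config"                      # mgmtConfigMap discovery.byName must match this
```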
```yaml
# Log the full merged configuration after load (default: false)
debugConfig: true
log:
  level: debug
```
Default debug config may expose sensitive data.

debugConfig: true plus level: debug can emit the full merged config, which often includes secrets. Defaulting to safer settings reduces accidental leakage in production.
🛠️ Proposed fix

```diff
-debugConfig: true
-log:
-  level: debug
+debugConfig: false
+log:
+  level: info
```
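One way to provide the opt-in the fix prompt asks for is an environment override in a development overlay of the adapter's Deployment. The sketch below assumes the framework maps HYPERFLEET_-prefixed variables onto config keys, as the HYPERFLEET_MAESTRO_* comments elsewhere in this template suggest; both variable names shown are hypothetical.

```yaml
# Hypothetical dev-only opt-in for verbose config dumping.
# HYPERFLEET_DEBUG_CONFIG and HYPERFLEET_LOG_LEVEL are assumed names,
# patterned after the HYPERFLEET_MAESTRO_* variables in this template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-adapter
spec:
  template:
    spec:
      containers:
        - name: adapter
          env:
            - name: HYPERFLEET_DEBUG_CONFIG   # overrides debugConfig: false
              value: "true"
            - name: HYPERFLEET_LOG_LEVEL      # assumed mapping for log.level
              value: "debug"
```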
```yaml
# Maestro transport client (optional - only if adapter uses Maestro transport)
maestro:
  # Maestro gRPC server connection (for ManifestWork operations)
  # Environment variable: HYPERFLEET_MAESTRO_GRPC_SERVER_ADDRESS
  grpcServerAddress: "maestro-grpc.maestro.svc.cluster.local:8090"

  # HTTPS server address for REST API operations (optional)
  # Environment variable: HYPERFLEET_MAESTRO_HTTP_SERVER_ADDRESS
  httpServerAddress: "http://maestro.maestro.svc.cluster.local:8000"

  # Source identifier for CloudEvents routing (must be unique across adapters)
  # Environment variable: HYPERFLEET_MAESTRO_SOURCE_ID
  sourceId: "hyperfleet-adapter"

  # Client identifier (defaults to sourceId if not specified)
  # Environment variable: HYPERFLEET_MAESTRO_CLIENT_ID
  clientId: "hyperfleet-adapter-client"

  # Set to true for development without TLS
  insecure: true
```
Avoid shipping an insecure Maestro client by default.

insecure: true disables TLS and weakens the default security posture. The safer default is false, with a note on how to enable insecure mode for development.
🛠️ Proposed fix

```diff
-  insecure: true
+  insecure: false
```
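Applying both the default flip and the comment guidance from the fix prompt above, the tail of the maestro block could read as follows. The HYPERFLEET_MAESTRO_INSECURE variable name is taken from the fix instructions; the rest mirrors the existing template.

```yaml
maestro:
  # ... other fields unchanged ...

  # TLS is enabled by default. Set to true only for local development/testing.
  # Environment variable: HYPERFLEET_MAESTRO_INSECURE
  insecure: false
```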
```yaml
manifests:
  - apiVersion: v1
    kind: Namespace
    metadata:
      name: "{{ .clusterId | lower }}"
      labels:
        hyperfleet.io/cluster-id: "{{ .clusterId }}"
        hyperfleet.io/managed-by: "{{ .metadata.name }}"
        hyperfleet.io/resource-type: "namespace"
      annotations:
        hyperfleet.io/created-by: "hyperfleet-adapter"
        hyperfleet.io/generation: "{{ .generationId }}"
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: "cluster-config"
      namespace: "{{ .clusterId }}"
      labels:
        hyperfleet.io/cluster-id: "{{ .clusterId }}"
      annotations:
        hyperfleet.io/generation: "{{ .generationId }}"
    data:
      cluster_id: "{{ .clusterId }}"
      cluster_name: "{{ .clusterName }}"
```
Fix template variable usage and the namespace mismatch in the inline manifests.

The Namespace uses {{ .clusterId | lower }}, but the ConfigMap targets {{ .clusterId }}; if clusterId isn't already lowercase, the ConfigMap will land in a non-existent namespace. Also, {{ .metadata.name }} isn't in the documented template variables list, so that label may resolve empty.
🛠️ Proposed fix

```diff
-        hyperfleet.io/managed-by: "{{ .metadata.name }}"
+        hyperfleet.io/managed-by: "{{ .adapterName }}"
 ...
-      namespace: "{{ .clusterId }}"
+      namespace: "{{ .clusterId | lower }}"
```
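To make the mismatch concrete, here is how the two manifests would render before the fix for a hypothetical clusterId of "Cluster-AB12" (the value is illustrative):

```yaml
# Rendered output before the fix, assuming clusterId = "Cluster-AB12".
- apiVersion: v1
  kind: Namespace
  metadata:
    name: "cluster-ab12"          # lowercased by `| lower`
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: "cluster-config"
    namespace: "Cluster-AB12"     # not lowercased: no such namespace exists,
                                  # and uppercase is not a valid DNS-1123 name anyway
```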