Status: DEPLOYED + DUCKLAKE SCHEMA UPDATED — working
Added soroban_op_count, total_fee_charged, contract_events_count to the bronze ledgers API response. Also updated the DuckLake catalog metadata to include these columns so cold storage queries resolve them.
Code changes:
- obsrvr-lake/stellar-query-api/go/hot_reader.go — Added 3 columns to SELECT in QueryLedgers()
- obsrvr-lake/stellar-query-api/go/cold_reader.go — Added 3 columns to SELECT in coldQueryLedgers()
- obsrvr-lake/stellar-query-api/go/query_service.go — Added sql.NullInt64 scan variables and conditional result map entries in scanLedgers()
DuckLake schema change:
- Inserted 3 rows into bronze_meta.ducklake_column for table_id=42 (active ledgers_row_v2):
  - column_id 69: soroban_op_count (int32), column_order 27
  - column_id 70: total_fee_charged (int64), column_order 28
  - column_id 71: contract_events_count (int32), column_order 29
  - All with begin_snapshot=4145, nulls_allowed=true, initial_default=NULL, default_value=NULL
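For reference, the metadata change above corresponds to a statement roughly like the following. The exact ducklake_column column names (column_name, column_type, etc.) are assumptions about the DuckLake catalog layout; verify against your DuckLake version before running anything like this.

```sql
-- Sketch only: column names assume the DuckLake catalog layout.
INSERT INTO bronze_meta.ducklake_column
  (column_id, table_id, column_name, column_type, column_order,
   begin_snapshot, nulls_allowed, initial_default, default_value)
VALUES
  (69, 42, 'soroban_op_count',      'int32', 27, 4145, true, NULL, NULL),
  (70, 42, 'total_fee_charged',     'int64', 28, 4145, true, NULL, NULL),
  (71, 42, 'contract_events_count', 'int32', 29, 4145, true, NULL, NULL);
```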
Current behavior: Columns appear in API response and cold storage queries resolve without error. Historical parquet files return null for these fields (data was written before the columns existed). New ledger data flushed to cold by the ingester will include real values.
Verification:
curl -H "$AUTH" "$BASE/bronze/ledgers?start=1363100&end=1363130&limit=1" | jq '.ledgers[0] | {soroban_op_count, total_fee_charged, contract_events_count}'
# Returns null for historical data; will return values for newly-ingested ledgers

Status: DEPLOYED — working
Added a query in HandleNetworkStats to fetch protocol_version from the latest ledger in ledgers_row_v2 via the unifiedReader (tries bronze hot first, then bronze cold).
Files changed:
obsrvr-lake/stellar-query-api/go/network_stats.go — Added protocol_version query block in HandleNetworkStats, before the fee stats fetch
Current behavior: Returns 25 (correct for testnet).
Verification:
curl -H "$AUTH" "$BASE/silver/stats/network" | jq '.ledger.protocol_version'
# Returns: 25

Status: DEPLOYED — working
Replaced hardcoded 5.0 with a real computation: queries the last 100 ledgers from bronze, computes EXTRACT(EPOCH FROM MAX(closed_at) - MIN(closed_at)) / (COUNT(*) - 1), rounded to 2 decimal places.
Files changed:
obsrvr-lake/stellar-query-api/go/network_stats.go — Added avg close time query block; added "log" and "math" imports
Current behavior: Returns 5.0 — this is the real computed value (Stellar testnet genuinely averages ~5 second ledger closes: 495 seconds across 99 gaps = 5.0s/ledger).
Technical note: The bronze hot buffer currently has no ledger data (flushed to cold), so the query falls through to bronze cold which succeeds. A non-fatal error log is emitted for the hot schema miss.
Verification:
curl -H "$AUTH" "$BASE/silver/stats/network" | jq '.ledger.avg_close_time_seconds'
# Returns: 5 (computed value, not hardcoded)

Status: DEPLOYED — code works, data empty (pipeline dependency)
Added a cold bronze fallback in HandleSorobanStats: when silver contract_data_current returns 0 entries, it tries querying contract_data_snapshot_v1 from bronze cold storage, using the contract_durability column and COUNT(DISTINCT key_hash).
Files changed:
obsrvr-lake/stellar-query-api/go/handlers_fee_stats.go — Added cold fallback block after the silver state query
Current behavior: Returns 0 for both persistent and temporary entries. Both silver contract_data_current and bronze cold contract_data_snapshot_v1 are empty. This is a pipeline/data dependency — the code path is correct and will work once data flows through.
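The fallback query is shaped roughly like the following. The table and column names are as described above, but the exact projection and any snapshot/ledger filters in the deployed code are not shown here, so treat this as an illustrative sketch.

```sql
-- Sketch of the cold-bronze fallback: one row per durability class,
-- counting distinct live entries by key_hash.
SELECT contract_durability,
       COUNT(DISTINCT key_hash) AS entries
FROM contract_data_snapshot_v1
GROUP BY contract_durability;
```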
Verification:
curl -H "$AUTH" "$BASE/silver/stats/soroban" | jq '.state'
# Returns: {"persistent_entries": 0, "temporary_entries": 0}

Status: NOT IMPLEMENTED — pipeline dependency only, no code gap
The handler already has a fallback path that returns observed functions from contract_invocations_raw. The gap is missing data in contract_metadata / contract_creations_v1, not missing code. Requires either a backfill run or transformer update to populate.
Known limitations:
- Historical parquet files lack the new columns: existing cold storage parquet files for ledgers_row_v2 were written before soroban_op_count, total_fee_charged, and contract_events_count existed. These return null; only newly-flushed data will have values. A backfill could rewrite historical parquet files if needed.
- Bronze hot buffer empty for ledgers: the hot buffer flushes ledger data to cold quickly. The current_sequence in network stats shows 0 because it comes from MAX(last_modified_ledger) FROM accounts_current in the silver hot reader, and accounts_current may be empty. Consider also pulling current_sequence from bronze ledgers_row_v2 as a fallback.
- Soroban state entries (pipeline): contract_data_snapshot_v1 is empty in both hot and cold bronze. Either the silver transformer needs to read from cold bronze, or the ingester needs to retain this data longer in hot.
- Contract metadata (pipeline): the contract_metadata table needs population via a backfill tool or transformer update.
Deployment:
- Deployed with obsrvr/stellar-query-api:latest (Docker Hub)
- Nomad job version 22, deployment 4a877bcf — successful
- force_deploy timestamp in Nomad job: 2026-03-07T01:30:00Z