Merged
51 changes: 36 additions & 15 deletions .github/workflows/docker-s3-deploy.yml
@@ -36,26 +36,47 @@ jobs:
exit 1
fi

- name: Import GPG key
uses: crazy-max/ghaction-import-gpg@v6
with:
gpg_private_key: ${{ secrets.GPG_PRIVATE_KEY }}
passphrase: ${{ secrets.GPG_PASSPHRASE }}
- name: Install cosign
uses: sigstore/cosign-installer@v3

- name: Sign tron-docker.zip with GPG
- name: Sign tron-docker.zip with Sigstore cosign (keyless)
# Keyless signing using GitHub OIDC — no private keys to manage or leak.
# The signature is bound to this workflow's identity (repo, ref, commit SHA).
# Verification: cosign verify-blob --certificate tron-docker.zip.cert \
# --signature tron-docker.zip.sig \
# --certificate-identity-regexp "https://github.com/tronprotocol/tron-docker" \
# --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
# tron-docker.zip
run: |
gpg --detach-sign --armor tron-docker.zip
# This creates tron-docker.zip.asc (ASCII-armored signature)
cosign sign-blob tron-docker.zip \
--yes \
--output-signature tron-docker.zip.sig \
--output-certificate tron-docker.zip.cert
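
One detail not visible in this hunk (an assumption about the rest of the workflow): both keyless cosign signing and the OIDC-based AWS credentials step below require the job to be granted an OIDC token. A minimal job-level permissions block would look like:

```yaml
# Hypothetical permissions block; required for GitHub to issue an OIDC token
permissions:
  id-token: write   # lets cosign and configure-aws-credentials request an OIDC token
  contents: read    # read access for checkout
```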

- name: Configure AWS Credentials
- name: Configure AWS Credentials (OIDC — no long-lived keys)
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ github.event.inputs.aws-region }} # Use input for region
# Uses GitHub OIDC provider to assume an IAM role with short-lived credentials.
# No static keys needed — credentials expire after the workflow run.
# Prerequisites:
# 1. Create an IAM OIDC identity provider for token.actions.githubusercontent.com
[Comment, Contributor Author]: This workflow isn't actually in use yet, so just modify the file now and configure it later.

# 2. Create an IAM role with trust policy allowing this repo:
# "Condition": {
# "StringEquals": {
# "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
# "token.actions.githubusercontent.com:sub": "repo:tronprotocol/tron-docker:ref:refs/heads/main"
# }
# }
# 3. Attach S3 put-object policy scoped to the target bucket only
# 4. Delete the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY secrets from GitHub
role-to-assume: ${{ secrets.AWS_OIDC_ROLE_ARN }}
aws-region: ${{ github.event.inputs.aws-region }}
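
The trust-policy condition sketched in the prerequisite comments above can be expanded into a complete IAM trust policy. A sketch, with the account ID as a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:tronprotocol/tron-docker:ref:refs/heads/main"
        }
      }
    }
  ]
}
```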

- name: Upload tron-docker.zip to S3
env:
BUCKET_NAME: ${{ github.event.inputs.bucket-name }}
VERSION: ${{ github.event.inputs.version }}
run: |
zip -r publish.zip tron-docker.zip tron-docker.zip.asc
aws s3 cp publish.zip s3://${{ github.event.inputs.bucket-name }}/package/publish-latest.zip
aws s3 cp publish.zip s3://${{ github.event.inputs.bucket-name }}/package/publish-${{github.event.inputs.version}}.zip
zip -r publish.zip tron-docker.zip tron-docker.zip.sig tron-docker.zip.cert
aws s3 cp publish.zip "s3://${BUCKET_NAME}/package/publish-latest.zip"
aws s3 cp publish.zip "s3://${BUCKET_NAME}/package/publish-${VERSION}.zip"
5 changes: 4 additions & 1 deletion conf/private_net_config_witness1.conf
@@ -272,7 +272,10 @@ genesis.block = {
}

localwitness = [
# address TPL66VK2gCXNCD7EJg9pgJRfqcRazjhUZY
# IMPORTANT: This is a DEMO private key for local testing ONLY
# Address: TPL66VK2gCXNCD7EJg9pgJRfqcRazjhUZY
# WARNING: Replace with your own generated key for any real deployment
# NEVER use this key on mainnet with real funds
da146374a75310b9666e834ee4ad0866d6f4035967bfc76217c5a495fff9f0d0 # you must enable this value, and it must match the witness address above
[Comment, Contributor]: Replace with YOUR_PRIVATE_KEY_HERE_64_CHARACTERS_HEXADECIMAL_STRING_EXAMPLE

[Comment, Contributor Author]: Better not: that would make it harder for users to start the demo. It is already mentioned in the related README.

]

5 changes: 4 additions & 1 deletion conf/private_net_config_witness2.conf
@@ -274,7 +274,10 @@ genesis.block = {
}

localwitness = [
# address TCjptjyjenNKB2Y6EwyVT43DQyUUorxKWi
# IMPORTANT: This is a DEMO private key for local testing ONLY
# Address: TCjptjyjenNKB2Y6EwyVT43DQyUUorxKWi
# WARNING: Replace with your own generated key for any real deployment
# NEVER use this key on mainnet with real funds
0ab0b4893c83102ed7be35eee6d50f081625ac75a07da6cb58b1ad2e9c18ce43 # you must enable this value, and it must match the witness address above
[Comment, Contributor]: Replace with YOUR_PRIVATE_KEY_HERE_64_CHARACTERS_HEXADECIMAL_STRING_EXAMPLE

[Comment, Contributor Author]: Same as above; kept as-is for the user quick-start.

]

31 changes: 20 additions & 11 deletions conf/private_net_layout.toml
@@ -1,31 +1,40 @@
# SECURITY WARNING: This file contains sensitive configuration for remote node deployment.
# DO NOT commit actual credentials, SSH keys, or real hostnames to the repository.
# Use environment variables, .env files (added to .gitignore), or secure credential management systems.
# See: https://12factor.net/config
#
# Example configuration for private network layout:

# [[nodes]]
# node_ip = "192.168.1.1" # Remote node's IP
# node_ip = "127.0.0.1" # Remote node's IP
# node_directory = "/path/to/directory" # Remote node's working directory for the node
# config_file = "/path/to/config" # Config file for remote node
# docker_compose_file =/path/to/config # Config docker-compose file for remote node
# docker_compose_file ="/path/to/config" # Config docker-compose file for remote node
# node_type = "fullnode/sr" # Fullnode or SR node
# ssh_port = 22
# ssh_user = "user1"
# ssh_password = "password1" # Optional; uncomment if using password auth
# # ssh_key = "/path/to/key1" # Optional; uncomment if using key auth

# [[nodes]]
# node_ip = "192.168.1.2" # Changed IP to demonstrate different nodes
# node_ip = "127.0.0.1" # Changed IP to demonstrate different nodes
# node_directory = "/path/to/directory"
# config_file = "/path/to/config"
# docker_compose_file =/path/to/config # Config docker-compose file for remote node
# docker_compose_file ="/path/to/config" # Config docker-compose file for remote node
# node_type = "fullnode/sr"
# ssh_port = 2222 # Custom SCP port for this node
# ssh_user = "user2"
# # No password or key; assumes SSH agent or pre-configured key


[[nodes]]
node_ip = "ec2-3-25-116-244.ap-southeast-2.compute.amazonaws.com"
node_directory = "/home/ubuntu/mytest"
config_file = "/Users/ubuntu/conf/private_net_config_others.conf"
docker_compose_file = "/Users/ubuntu/docker-compose.yml"
node_ip = "127.0.0.1" # Replace with your actual node IP or hostname
node_directory = "/path/to/tron-node" # Replace with your actual node directory
config_file = "/path/to/private_net_config.conf" # Replace with your actual config path
docker_compose_file = "/path/to/docker-compose.yml" # Replace with your actual docker-compose path
ssh_port = 22
ssh_user = "ubuntu"
# ssh_password = "password1"
ssh_key = "/Users/ubuntu/Downloads/test-ci.pem" # Optional; uncomment if using key auth
ssh_user = "ubuntu" # Replace with your actual SSH user
# ssh_password = "password" # Optional; uncomment if using password auth (NOT RECOMMENDED)
# ssh_key = "/path/to/your/private/key" # Optional; uncomment if using key auth
# SECURITY WARNING: Never commit actual SSH keys or private credentials to the repository!
# Use environment variables or secure credential management systems instead.
10 changes: 8 additions & 2 deletions metric_monitor/REMOTE_WRITE_WITH_THANOS.md
@@ -145,12 +145,18 @@ docker-compose -f ./docker-compose/docker-compose-alloy.yml up -d
The [Thanos Receive](https://thanos.io/tip/components/receive.md/#receiver) service implements the Prometheus Remote Write API. It builds on top of the existing Prometheus TSDB and retains its usefulness while extending its functionality with long-term-storage, horizontal scalability, and downsampling. Prometheus instances are configured to continuously write metrics to it. Thanos Receive exposes the StoreAPI so that Thanos Queriers can query received metrics in real-time.


First, deploy [Minio](https://github.com/minio/minio) for long-term metric storage. Minio offers S3-compatible object storage functionality, allowing Thanos Receive to upload TSDB blocks to storage buckets at 2-hour intervals. While this guide uses Minio, you can opt for any storage service from the [Thanos Supported Clients](https://thanos.io/tip/thanos/storage.md/#supported-clients) list. For long-term monitoring, we recommend implementing a retention policy on your storage service to efficiently manage historical metric data. For instance, you might configure an S3 lifecycle policy when using AWS to automatically remove metrics older than one year.
First, deploy [Minio](https://github.com/minio/minio) for long-term metric storage. Minio offers S3-compatible object storage functionality, allowing Thanos Receive to upload TSDB blocks to storage buckets at 2-hour intervals.

**⚠️ Important**: The MinIO configuration in this guide uses demo credentials (`minio`/`melovethanos`) for local testing only. For production deployments, use AWS S3 or other cloud storage services with proper IAM credentials, or generate strong unique credentials if using MinIO.
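
One way to keep real credentials out of the repository is to read them from an untracked `.env` file. A sketch, assuming the compose file passes these variables through to the container (standard MinIO environment variable names):

```yaml
# Hypothetical docker-compose override; keeps credentials out of version control
services:
  minio:
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER:?set in an untracked .env file}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD:?set in an untracked .env file}
```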

While this guide uses Minio, you can opt for any storage service from the [Thanos Supported Clients](https://thanos.io/tip/thanos/storage.md/#supported-clients) list. For long-term monitoring, we recommend implementing a retention policy on your storage service to efficiently manage historical metric data. For instance, you might configure an S3 lifecycle policy when using AWS to automatically remove metrics older than one year.
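
As an illustration of the one-year retention mentioned above (the rule ID is a placeholder), an S3 lifecycle configuration applied with `aws s3api put-bucket-lifecycle-configuration` might look like:

```json
{
  "Rules": [
    {
      "ID": "expire-metrics-after-one-year",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 365 }
    }
  ]
}
```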

```sh
# Start Minio
# Start Minio (for local testing only)
docker-compose -f ./docker-compose/minio.yml up -d

# First set the MinIO alias with root credentials to enable bucket creation permissions
# Note: These are demo credentials - replace with your own in production
docker exec minio mc alias set local http://localhost:9000 minio melovethanos

# Then create the bucket
3 changes: 3 additions & 0 deletions metric_monitor/conf/bucket_storage.yml
@@ -3,5 +3,8 @@ config:
bucket: "test-thanos-001"
endpoint: "minio:9000" # for example: s3.ap-southeast-1.amazonaws.com for AWS S3 on region ap-southeast-1
insecure: true # True for local test using http instead of https
# ⚠️ DEMO CREDENTIALS FOR LOCAL TESTING ONLY ⚠️
# These match the MinIO demo credentials and should NEVER be used in production
# For production: Use AWS S3 with proper IAM roles or access keys
access_key: "minio"
secret_key: "melovethanos"