Releases: pingcap/tiup
v1.2.4
Fixes
- Fix the issue that Pump & Drainer get different node IDs between tidb-ansible and TiUP (#903, @lucklove)
  - For clusters imported from tidb-ansible, if Pump or Drainer is restarted, it starts with a new node ID
  - Risk of this issue: binlog may not work correctly after restarting Pump or Drainer
- Fix the issue that the audit log may get lost in some special cases (#879, #882, @9547)
  - If the user executes two commands one right after the other and the second one quits within 1 second, the audit log of the first command will be overwritten by the second one
  - Risk caused by this issue: some audit logs may get lost in the above case
- Fix the issue that a new component deployed with `tiup cluster scale-out` doesn't auto-start when rebooting (#905, @9547)
  - Risk caused by this issue: the cluster may be unavailable after rebooting
- Fix the issue that data directory of tiflash is not deleted if multiple data directories are specified (#871, @9547)
- Fix the issue that `node_exporter` and `blackbox_exporter` are not cleaned up after scaling in all instances on the specified host (#857, @9547)
- Fix the issue that the patch command fails when trying to patch a DM cluster (#884, @lucklove)
- Fix the issue that the bench component reports `Error 1105: client has multi-statement capability disabled` (#887, @mahjonp)
- Fix the issue that the TiSpark node can't be upgraded (#901, @lucklove)
- Fix the issue that tiup-playground can't start TiFlash with newest nightly PD (#902, @lucklove)
Improvements
- Ignore the missing TiSpark master error when listing clusters since the master node may have been removed by `scale-in --force` (#920, @AstroProfundis)
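A minimal sketch of the forced removal that can leave the TiSpark master absent from the cluster metadata; the cluster name and node address below are placeholders:

```sh
# forcibly remove an unreachable node; the TiSpark master may then be missing
# from the metadata, which the list command now tolerates
tiup cluster scale-in mycluster --node 172.16.13.11:7077 --force
```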
v1.2.3
v1.2.1
Risk Events
A critical bug introduced in v1.0.0 has been fixed in v1.0.8.
If the user wants to scale in some TiKV nodes with the `tiup cluster scale-in` command of tiup-cluster, TiUP may delete TiKV nodes by mistake, causing data loss in the TiDB cluster.
The root cause:
- While TiUP mistakenly treats these TiKV nodes' state as `tombstone`, it reports an error that confuses the user.
- Then the user would execute the command `tiup cluster display` to confirm the real state of the cluster, but the `display` command also shows these TiKV nodes in the `tombstone` state.
- What's worse, the `display` command destroys tombstone nodes automatically, with no user confirmation required. So these TiKV nodes were destroyed by mistake.
To prevent this, we introduce a safer, manual way to clean up tombstone nodes in this release.
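A minimal sketch of the new manual cleanup flow, assuming a cluster named `mycluster` (the name is a placeholder):

```sh
# display now only reports tombstone nodes instead of destroying them
tiup cluster display mycluster
# cleaning up tombstone instances is an explicit, separate step
tiup cluster prune mycluster
```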
Improvements
- Introduce a safer way to clean up tombstone nodes (#858, @lucklove)
  - When a user scales in a TiKV server, its data is not deleted until the user executes a `display` command; this is risky because the user gets no chance to confirm
  - We have added a `prune` command for the cleanup stage; the `display` command will not clean up tombstone instances any more
- Skip auto-starting the cluster before the scale-out action because there may be some damaged instances that can't be started (#848, @lucklove)
  - In this version, the user should make sure by themselves that the cluster is working correctly before executing `scale-out`
- Introduce a more graceful way to check TiKV labels (#843, @lucklove)
  - Before this change, we checked TiKV labels from the config files of the TiKV and PD servers; however, servers imported from a tidb-ansible deployment don't store the latest labels in their local config, which causes inaccurate label information
  - After this change, we fetch PD and TiKV labels via the PD API in the `display` command
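For reference, the labels that `display` now reports can also be inspected directly through the PD API; a hedged example, assuming PD is reachable at the default `127.0.0.1:2379` address:

```sh
# list all stores, including their labels, via the PD HTTP API
curl -s http://127.0.0.1:2379/pd/api/v1/stores
```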
Fixes
- Fix the issue that there is a data race when concurrently saving the same file (#836, @9547)
  - We found that when the cluster was deployed with TLS enabled, the ca.crt file was saved multiple times in parallel, which may leave the ca.crt file empty
  - The impact of this issue is that the tiup client may not be able to communicate with the cluster
- Fix the issue that files copied by TiUP may have a different mode from the original files (#844, @lucklove)
- Fix the issue that the tiup script is not updated after scaling in PD (#824, @9547)
Temporary tags for other components to use `errdoc-gen`
This is a tag for development usage; use v1.2.1 for production.
v1.2.0
New Features
- Support the `tiup env` sub-command (#788, @lucklove); see the example after this list
- Support TiCDC for playground (#777, @leoppro)
- Support limiting core dump size (#817, @lucklove)
- Support using latest Spark and TiSpark release (#779, @lucklove)
- Support new cdc arguments `gc-ttl` and `tz` (#770, @lichunzhu)
- Support specifying custom ssh and scp paths (#734, @9547)
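A brief sketch of the `tiup env` sub-command mentioned above; passing variable names as arguments to narrow the output is an assumption, and the exact output format may differ by version:

```sh
# print the environment variables TiUP honours (e.g. TIUP_HOME, TIUP_MIRRORS) and their values
tiup env
# assumed: narrow the output to specific variables
tiup env TIUP_MIRRORS
```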
Fixes
- Fix the issue that `tiup update --self` results in tiup's binary file being deleted (#816, @lucklove)
- Fix per-host custom port for drainer not handled correctly on importing (#806, @AstroProfundis)
- Fix the issue that help message is inconsistent (#758, @9547)
- Fix the issue that dm not applying config files correctly (#810, @lucklove)
- Fix the issue that playground displays a wrong TiDB number in the error message (#821, @SwanSpouse)
Improvements
- Automatically check if TiKV's label is set (#800, @lucklove)
- Download component with stream mode to avoid memory explosion (#755, @9547)
- Save and display absolute paths for the deploy directory, data directory, and log directory to avoid confusion (#822, @lucklove)
- Redirect DM stdout to log files (#815, @csuzhangxc)
- Skip download nightly package when it exists (#793, @lucklove)
v1.1.2
v1.1.1
v1.1.0
New Features
- [experimental] Support specifying customized configuration files for monitor components (#712, @lucklove)
- Support specifying user group or skipping creating a user in the deploy and scale-out stage (#678, @lucklove)
  - To specify the group: https://github.com/pingcap/tiup/blob/master/examples/topology.example.yaml#L7
  - To skip creating the user: `tiup cluster deploy/scale-out --skip-create-user xxx`
- [experimental] Support renaming a cluster with the command `tiup cluster rename <old-name> <new-name>` (#671, @lucklove)
  - Grafana stores some data related to the cluster name in its grafana.db. The rename action will NOT delete it, so some useless panels may need to be deleted manually.
- [experimental] Introduce the `tiup cluster clean` command (#644, @lucklove):
  - Clean up all data in the specified cluster: `tiup cluster clean ${cluster-name} --data`
  - Clean up all logs in the specified cluster: `tiup cluster clean ${cluster-name} --log`
  - Clean up all logs and data in the specified cluster: `tiup cluster clean ${cluster-name} --all`
  - Clean up all logs and data in the specified cluster, except the prometheus service: `tiup cluster clean ${cluster-name} --all --ignore-role prometheus`
  - Clean up all logs and data in the specified cluster, except the node `172.16.13.11:9000`: `tiup cluster clean ${cluster-name} --all --ignore-node 172.16.13.11:9000`
  - Clean up all logs and data in the specified cluster, except the host `172.16.13.12`: `tiup cluster clean ${cluster-name} --all --ignore-node 172.16.13.12`
- Support skipping evicting store when there is only 1 tikv (#662, @lucklove)
- Support importing clusters with binlog enabled (#652, @AstroProfundis)
- Support yml source format with tiup-dm (#655, @july2993)
- Support detecting port conflict of monitoring agents between different clusters (#623, @AstroProfundis)
Fixes
- Set correct `deploy_dir` of monitoring agents when importing ansible deployed clusters (#704, @AstroProfundis)
- Fix the issue that `tiup update --self` may make root.json invalid with offline mirror (#659, @lucklove)
Improvements
- Add `advertise-status-addr` for tiflash to support host name (#676, @birdstorm)
Release v1.0.9
tiup
- Clone with yanked version #602
- Support yanking a single version on the client side #602
- Support bash and zsh completion #606
- Handle yanked versions when updating components #635
tiup-cluster
- Validate topology changes after edit-config #609
- Allow continue editing when new topology has errors #624
- Fix wrongly set data_dir of tiflash when importing from ansible #612
- Support native ssh client #615 (see the example after this list)
- Support refresh configuration only when reload #625
- Apply config file on scaled pd server #627
- Refresh monitor configs on reload #630
- Support posix style argument for user flag #631
- Fix PD config incompatible when retrieving dashboard address #638
- Integrate tispark #531 #621
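A hedged usage sketch for the native SSH client support above, assuming the flag is named `--native-ssh` and a cluster named `mycluster`:

```sh
# use the machine's own ssh/scp binaries instead of the built-in SSH library
tiup cluster display mycluster --native-ssh
```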
Release v1.0.8
Risk Events
A critical bug introduced in v1.0.0 has been fixed in v1.0.8.
If the user wants to scale in some TiKV nodes with the `tiup cluster scale-in` command of tiup-cluster, TiUP may delete TiKV nodes by mistake, causing data loss in the TiDB cluster.
The root cause:
- While TiUP mistakenly treats these TiKV nodes' state as `tombstone`, it reports an error that confuses the user.
- Then the user would execute the command `tiup cluster display` to confirm the real state of the cluster, but the `display` command also shows these TiKV nodes in the `tombstone` state.
- What's worse, the `display` command destroys tombstone nodes automatically, with no user confirmation required. So these TiKV nodes were destroyed by mistake.
To prevent this, we introduce a safer, manual way to clean up tombstone nodes in this release.
- Fix the bug that the ctl working directory is different from TiUP's (#589)
- Introduce a more general way to configure the profile (#578)
- cluster: properly pass --wait-timeout to systemd operations (#585)
- Always match the newest store when matching by address (#579)
- Fix init config with check config (#583)
- Bugfix: patch can't overwrite twice (#558)
- Request remote while local manifest expired (#560)
- Encapsulate operation about meta file (#567)
- Playground: fix panic if failed to start tiflash (#543)
- Cluster: show message for impossible fix (#550)
- Fix scale-in of tiflash in playground (#541)
- Fix config of grafana (#535)