Releases: pingcap/tiup

v1.2.4

19 Nov 11:08

Fixes

  • Fix the issue that Pump & Drainer get different node IDs under tidb-ansible and TiUP (#903, @lucklove)
    • For a cluster imported from tidb-ansible, if a Pump or Drainer instance is restarted, it starts with a new node ID
    • Risk of this issue: binlog may not work correctly after a Pump or Drainer restart
  • Fix the issue that audit logs may get lost in a special case (#879, #882, @9547)
    • If the user executes two commands one right after the other, and the second one quits within 1 second, the audit log of the first command is overwritten by the second one (see the audit example after this list)
    • Risk caused by this issue: some audit logs may get lost in the above case
  • Fix the issue that new components deployed with tiup cluster scale-out don't start automatically after a reboot (#905, @9547)
    • Risk caused by this issue: the cluster may be unavailable after its hosts reboot (see the service check after this list)
  • Fix the issue that TiFlash data directories are not deleted when multiple data directories are specified (#871, @9547)
  • Fix the issue that node_exporter and blackbox_exporter are not cleaned up after scaling in all instances on the specified host (#857, @9547)
  • Fix the issue that the patch command fails when trying to patch a DM cluster (#884, @lucklove)
  • Fix the issue that the bench component reports Error 1105: client has multi-statement capability disabled (#887, @mahjonp)
  • Fix the issue that the TiSpark node can't be upgraded (#901, @lucklove)
  • Fix the issue that tiup-playground can't start TiFlash with the latest nightly PD (#902, @lucklove)
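
For context, tiup-cluster keeps an audit log of every command it executes; the overwrite bug above clobbered these records. A minimal sketch of inspecting them (the audit ID is a hypothetical placeholder):

    # List all recorded operations with their audit IDs and timestamps
    tiup cluster audit

    # Replay the log of a single operation by its audit ID
    tiup cluster audit 4BLh5vw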
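
The reboot issue can be verified on an affected host. A minimal sketch, assuming a TiKV instance on port 20160 (TiUP names its systemd units <component>-<port>.service):

    # Prints "enabled" once the fix is applied; "disabled" reproduces the bug
    systemctl is-enabled tikv-20160.service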

Improvements

  • Ignore the "no tispark master" error when listing clusters, since the master node may have been removed by scale-in --force (#920, @AstroProfundis)

v1.2.3

30 Oct 10:52

Fixes

  • Fix a misleading warning message in the display command (#869, @lucklove)

v1.2.1

23 Oct 09:35

Risk Events

A critical bug introduced in v1.0.0 has been fixed in v1.0.8.
If the user scales in TiKV nodes with the command tiup cluster scale-in, TiUP may delete TiKV nodes by mistake, causing TiDB cluster data loss.
The root cause:

  1. TiUP wrongly treated these TiKV nodes' state as tombstone and reported an error that confused the user.
  2. The user then executed the command tiup cluster display to confirm the real state of the cluster, but the display command also showed these TiKV nodes as being in the tombstone state;
  3. worse, the display command destroyed tombstone nodes automatically, with no user confirmation required, so these TiKV nodes were destroyed by mistake.

To prevent this, this release introduces a safer, manual way to clean up tombstone nodes (see the Improvements below and the sketch that follows them).

Improvements

  • Introduce a safer way to clean up tombstone nodes (#858, @lucklove)
    • When a user scaled in a TiKV server, its data was not deleted until the user executed a display command; this was risky because the user had no chance to confirm
    • We have added a prune command for the cleanup stage; the display command no longer cleans up tombstone instances (see the sketch after this list)
  • Skip auto-starting the cluster before the scale-out action, because there may be damaged instances that can't be started (#848, @lucklove)
    • In this version, the user should make sure the cluster is working correctly before executing scale-out
  • Introduce a more graceful way to check TiKV labels (#843, @lucklove)
    • Before this change, we read TiKV labels from the config files of the TiKV and PD servers; however, servers imported from a tidb-ansible deployment don't store the latest labels in their local config, so the label information could be inaccurate
    • After this change, the display command fetches PD and TiKV labels through the PD API (see the example after this list)
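
A minimal sketch of the resulting scale-in workflow (the cluster name and node address are placeholders):

    # Scale in a TiKV node; the store moves through Offline to Tombstone
    tiup cluster scale-in <cluster-name> --node 172.16.13.11:20160

    # display now only reports Tombstone instances instead of destroying them
    tiup cluster display <cluster-name>

    # Explicitly clean up Tombstone instances when you are ready
    tiup cluster prune <cluster-name>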
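
The label information that display now relies on can also be inspected by hand. A minimal sketch, assuming a PD endpoint at 127.0.0.1:2379 (jq is used only for readability):

    # PD's stores API returns every TiKV store together with its labels
    curl -s http://127.0.0.1:2379/pd/api/v1/stores | jq '.stores[].store.labels'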

Fixes

  • Fix the issue that there is a data race when the same file is saved concurrently (#836, @9547)
    • We found that when a cluster was deployed with TLS enabled, the ca.crt file was saved multiple times in parallel, which could leave the ca.crt file empty
    • The impact of this issue is that the tiup client may fail to communicate with the cluster
  • Fix the issue that files copied by TiUP may have a different mode than the original files (#844, @lucklove)
  • Fix the issue that the tiup script is not updated after scaling in PD (#824, @9547)

Temporary tags for other components to use `errdoc-gen`

26 Oct 03:58

This is a tag for development usage; use v1.2.1 for production.

v1.2.0

29 Sep 08:47

Fixes

  • Fix the issue that tiup update --self results in TiUP's binary file being deleted (#816, @lucklove)
  • Fix per-host custom ports for drainer not being handled correctly on importing (#806, @AstroProfundis)
  • Fix the issue that help messages are inconsistent (#758, @9547)
  • Fix the issue that dm does not apply config files correctly (#810, @lucklove)
  • Fix the issue that playground displays the wrong number of TiDB instances in its error message (#821, @SwanSpouse)

Improvements

  • Automatically check whether TiKV's labels are set (#800, @lucklove)
  • Download components in stream mode to avoid memory explosion (#755, @9547)
  • Save and display absolute paths for the deploy directory, data directory and log directory to avoid confusion (#822, @lucklove)
  • Redirect DM stdout to log files (#815, @csuzhangxc)
  • Skip downloading the nightly package when it already exists (#793, @lucklove)

v1.1.2

11 Sep 12:35

Fixes

  • Fix the issue that the TiKV store leader count is not correct (#762)
  • Fix the issue that TiFlash's data is not cleaned up (#768)
  • Fix the issue that tiup cluster deploy --help displays the wrong help message (#758)
  • Fix the issue that tiup-playground can't display and scale the cluster (#749)

v1.1.1

01 Sep 12:41

Fixes

  • Remove the username root from the sudo command #731
  • Transfer the default alertmanager.yml if no local config file is specified #735
  • Only remove the corresponding config files in InitConfig for the monitor service, in case it's a shared directory #736

v1.1.0

28 Aug 09:36

New Features

  • [experimental] Support specifying customized configuration files for monitor components (#712, @lucklove)
  • Support specifying a user group or skipping user creation in the deploy and scale-out stages (#678, @lucklove); see the sketch after this list
  • [experimental] Support renaming a cluster with the command tiup cluster rename <old-name> <new-name> (#671, @lucklove)

    Grafana stores some data related to the cluster name in its grafana.db. The rename action will NOT delete that data, so some stale panels may need to be deleted manually.

  • [experimental] Introduce the tiup cluster clean command (#644, @lucklove):
    • Clean up all data in the specified cluster: tiup cluster clean ${cluster-name} --data
    • Clean up all logs in the specified cluster: tiup cluster clean ${cluster-name} --log
    • Clean up all logs and data in the specified cluster: tiup cluster clean ${cluster-name} --all
    • Clean up all logs and data in the specified cluster, except the prometheus service: tiup cluster clean ${cluster-name} --all --ignore-role prometheus
    • Clean up all logs and data in the specified cluster, except the node 172.16.13.11:9000: tiup cluster clean ${cluster-name} --all --ignore-node 172.16.13.11:9000
    • Clean up all logs and data in the specified cluster, except the host 172.16.13.11: tiup cluster clean ${cluster-name} --all --ignore-node 172.16.13.11
  • Support skipping store eviction when there is only one TiKV node (#662, @lucklove)
  • Support importing clusters with binlog enabled (#652, @AstroProfundis)
  • Support yml source format with tiup-dm (#655, @july2993)
  • Support detecting port conflict of monitoring agents between different clusters (#623, @AstroProfundis)
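
A minimal sketch of the new user/group option (the group name here is illustrative; merge the fragment into a full topology file that also defines your pd/tikv/tidb sections):

    # Append a global user/group setting to the topology used for deploy or scale-out
    cat >> topology.yaml <<'EOF'
    global:
      user: tidb
      group: pingcap
    EOF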

Fixes

  • Set correct deploy_dir of monitoring agents when importing ansible deployed clusters (#704, @AstroProfundis)
  • Fix the issue that tiup update --self may make root.json invalid with an offline mirror (#659, @lucklove)

Improvements

  • Add advertise-status-addr for TiFlash to support host names (#676, @birdstorm)

Release v1.0.9

03 Aug 11:21

tiup

  • Support cloning a mirror with yanked versions #602
  • Support yanking a single version on the client side #602
  • Support bash and zsh completion #606
  • Handle yanked versions when updating components #635

tiup-cluster

  • Validate topology changes after edit-config #609
  • Allow continuing to edit when the new topology has errors #624
  • Fix wrongly set data_dir of TiFlash when importing from ansible #612
  • Support the native SSH client #615; see the sketch after this list
  • Support refreshing configuration only when reloading #625
  • Apply config files on scaled PD servers #627
  • Refresh monitor configs on reload #630
  • Support POSIX-style arguments for the user flag #631
  • Fix PD config incompatibility when retrieving the dashboard address #638
  • Integrate TiSpark #531 #621
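
A minimal sketch of opting into the native SSH client (the TIUP_NATIVE_SSH environment variable switches tiup-cluster from the built-in SSH library to the system ssh binary; the cluster name is a placeholder):

    # Use the system ssh client for remote operations in this session
    export TIUP_NATIVE_SSH=true
    tiup cluster display <cluster-name>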

Release v1.0.8

13 Jul 10:12

Risk Events

A critical bug introduced in v1.0.0 has been fixed in v1.0.8.
If the user scales in TiKV nodes with the command tiup cluster scale-in, TiUP may delete TiKV nodes by mistake, causing TiDB cluster data loss.
The root cause:

  1. TiUP wrongly treated these TiKV nodes' state as tombstone and reported an error that confused the user.
  2. The user then executed the command tiup cluster display to confirm the real state of the cluster, but the display command also showed these TiKV nodes as being in the tombstone state;
  3. worse, the display command destroyed tombstone nodes automatically, with no user confirmation required, so these TiKV nodes were destroyed by mistake.

To prevent this, this release introduces a safer, manual way to clean up tombstone nodes.

  • Fix the bug that ctl's working directory is different from TiUP's (#589)
  • Introduce a more general way to configure the profile (#578)
  • cluster: properly pass --wait-timeout to systemd operations (#585)
  • Always match the newest store when matching by address (#579)
  • Fix init config with check config (#583)
  • Bugfix: patch can't overwrite twice (#558)
  • Request the remote mirror when the local manifest has expired (#560)
  • Encapsulate operations on the meta file (#567)
  • Playground: fix a panic when TiFlash fails to start (#543)
  • Cluster: show a message for impossible fixes (#550)
  • Fix scale-in of TiFlash in playground (#541)
  • Fix the config of Grafana (#535)