
WEAVE: client.flush() and client.finish()#2114

Draft
dbrian57 wants to merge 2 commits into main from weave/flush_finish

Conversation

dbrian57 (Contributor) commented Feb 3, 2026

Description

Resolves DOCS-1508. Adds a section on how to use `client.finish()` and `client.flush()` to avoid data loss.

github-actions bot commented Feb 3, 2026

📚 Mintlify Preview Links

🔗 View Full Preview

📝 Changed (1 total)

📄 Pages (1)

| File | Preview |
| --- | --- |
| `weave/guides/troubleshooting.mdx` | Troubleshooting |

🤖 Generated automatically when Mintlify deployment succeeds
📍 Deployment: b704b21 at 2026-02-03 18:08:40 UTC

github-actions bot commented Feb 3, 2026

🔗 Link Checker Results

All links are valid!

No broken links were detected.

Checked against: https://wb-21fd5541-weave-flush-finish.mintlify.app

**Available methods:**

- `client.flush()`: Simple, silent flushing (recommended for worker processes and CI environments).
- `client.finish()`: Includes progress feedback via a progress bar or status callbacks (useful for interactive scripts).
Member

technically yes, but we also expose weave.finish which is probably nicer, not sure how we should include that info


The following example demonstrates how to use `client.flush()` with a multiprocessing application:
Member

second example feels a bit redundant but I get it, maybe we can have like tabs here?


Weave performs network uploads in background threads to minimize impact on your application's performance. However, when using worker processes like Celery, multiprocessing, or other task queue systems, the worker process may exit before background threads finish uploading traces, causing trace data to be lost.

To prevent data loss in worker processes, call `client.flush()` or `client.finish()` before the worker task completes. This ensures all background uploads complete before the process exits.
Member

there is one other major benefit to calling this method: it keeps the user process open longer, which ensures we retain maximum upload speed in the background. when workers or the user process end without calling flush/finish and there is still work to do in the background, uploads are forced by the python `ThreadPoolExecutor` to run essentially serially instead of in parallel, which can massively slow down the script completing. not sure how much of that you want to use, but we should probably mention the perf piece at some point

Member

oh, i see that this stuff is already mentioned above, okay probably no action needed

Member

we should probably somehow reference the other section though maybe idk
