refactor(core): make log file operations non-blocking #700
Conversation
Move file writing and cleanup operations into an async function executed with setTimeout to prevent blocking the main event loop. This allows the logging process to complete without waiting on disk I/O.

- Wrap writeFile and cleanup logic in a writeAndCleanup async function
- Schedule execution with setTimeout(0) to defer to the next event loop iteration
- Add error handling for stat and unlink operations
- Track a deleted count instead of logging each deletion
- Ignore failed file deletions gracefully
Summary of Changes

This pull request refactors the logger utility to prevent file input/output operations from blocking the main application thread. By asynchronously handling log file writing and the cleanup of old log files, the application's responsiveness is improved, and log management becomes more robust with graceful error handling for file system interactions.
Caution: Review failed. The pull request is closed.

Walkthrough

This PR refactors the log file cleanup logic, introducing an async helper function.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Code Review
This pull request refactors the logging utility to perform file I/O operations in a non-blocking manner, which improves application responsiveness. However, a critical security concern exists due to the use of a predictable path in the shared temporary directory for log files, potentially leading to information disclosure or symbolic link attacks by local users. It is recommended to use secure temporary directory creation with restrictive permissions. Additionally, I've suggested enhancing error handling to prevent silent failures during log operations and refactoring the log cleanup logic to use Promise.all for improved performance and readability.
```typescript
setTimeout(() => {
    writeAndCleanup().catch(() => undefined)
}, 0)
```
The .catch(() => undefined) silently swallows all errors that might occur during file writing or cleanup, including critical errors like failing to write the log file. This could lead to silent data loss and make debugging difficult. It's better to log these errors to ensure that failures in the logging mechanism are visible.
```typescript
setTimeout(() => {
    writeAndCleanup().catch((error) => {
        logger.error(`[${tag}] Failed to write or clean up log files:`, error)
    })
}, 0)
```

```typescript
const writeAndCleanup = async () => {
    await fs.promises.writeFile(logFile, output)
```
The application writes log data to a predictable file path in the system's shared temporary directory (/tmp/chatluna/logs/). This practice introduces two security risks:
- Information Disclosure: Files created in shared temporary directories are often world-readable by default. If the logs contain sensitive information or Personally Identifiable Information (PII), other local users on the system may be able to read them.
- Symbolic Link Attack: An attacker can create a symbolic link at the predictable log file path before the application creates the file. If the application has higher privileges, it may follow the symlink and overwrite an arbitrary sensitive file with the log content, leading to a Denial of Service or potential privilege escalation.

Remediation:

- Use a secure method for creating temporary files, such as `fs.mkdtemp`, to create a private directory with restrictive permissions (e.g., `0700`).
- Ensure that the log directory and files are created with restrictive permissions that only allow the application's user to read and write.
- Consider allowing users to configure a dedicated, secure log directory outside of the shared temporary space.
```typescript
let deletedCount = 0

for (const file of files) {
    if (!file.startsWith('chatluna-log-') || !file.endsWith('.log')) {
        continue
    }

    const filePath = `${logDir}/${file}`
    let stats: fs.Stats
    try {
        stats = await fs.promises.stat(filePath)
    } catch {
        continue
    }

    if (stats.mtimeMs < sevenDaysAgo) {
        try {
            await fs.promises.unlink(filePath)
            deletedCount += 1
        } catch {
            // ignore failed deletions
        }
        await new Promise((resolve) => setTimeout(resolve, 0))
    }
}
```
The current implementation for cleaning up old log files processes them sequentially using a `for...of` loop with `await`. For a large number of log files, this can be inefficient. The use of `await new Promise((resolve) => setTimeout(resolve, 0))` to yield to the event loop can also be improved.

A more performant and idiomatic approach is to process files in parallel using `Promise.all`. This issues all file system operations concurrently, making the cleanup process faster and removing the need for manual yielding.
```typescript
const deletionResults = await Promise.all(
    files.map(async (file) => {
        if (!file.startsWith('chatluna-log-') || !file.endsWith('.log')) {
            return 0
        }
        const filePath = `${logDir}/${file}`
        try {
            const stats = await fs.promises.stat(filePath)
            if (stats.mtimeMs < sevenDaysAgo) {
                await fs.promises.unlink(filePath)
                return 1
            }
        } catch (error) {
            // Log errors for individual file operations but continue with others.
            logger.warn(`[${tag}] Failed to process old log file ${filePath}:`, error)
        }
        return 0
    })
)
const deletedCount = deletionResults.reduce((sum, count) => sum + count, 0)
```
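For context, the `sevenDaysAgo` threshold both snippets compare against is not defined anywhere in this thread. Based on its comparison with `stats.mtimeMs`, it is presumably a millisecond timestamp; an assumed definition would be:

```javascript
// Assumed definition of the `sevenDaysAgo` threshold (not shown in the
// diff); it must be a millisecond timestamp to compare with mtimeMs.
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000
const sevenDaysAgo = Date.now() - SEVEN_DAYS_MS

// A file is stale when its last-modified time precedes the threshold.
const isStale = (mtimeMs) => mtimeMs < sevenDaysAgo
```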
Summary
This PR refactors the logger utility to make file writing and cleanup operations non-blocking by deferring them to the next event loop iteration.
Changes
- Wrap `writeFile` and log cleanup logic into an async `writeAndCleanup` function
- Defer execution with `setTimeout(0)` to prevent blocking the main event loop