
Conversation

@dingyi222666
Member

Summary

This PR refactors the logger utility to make file writing and cleanup operations non-blocking by deferring them to the next event loop iteration.

Changes

  • Wrapped writeFile and log cleanup logic in an async writeAndCleanup function
  • Scheduled execution with setTimeout(0) to prevent blocking the main event loop
  • Added robust error handling for file stat and deletion operations
  • Improved cleanup logging to show total deleted count
  • Made file deletion failures non-fatal with graceful error handling
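
The bullets above describe a defer-then-cleanup pattern that can be sketched as follows. This is a minimal illustration, not the PR's actual code: scheduleWriteAndCleanup, logDir, logFile, and output are hypothetical names chosen for the example.

```typescript
import * as fs from 'fs'
import * as path from 'path'

const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000

// Sketch only: the parameter names are illustrative, not the identifiers
// used in packages/core/src/utils/logger.ts.
function scheduleWriteAndCleanup(
    logDir: string,
    logFile: string,
    output: string
) {
    const writeAndCleanup = async () => {
        // Write the current log output first.
        await fs.promises.writeFile(logFile, output)

        // Then delete log files older than seven days, one at a time.
        const cutoff = Date.now() - SEVEN_DAYS_MS
        const files = await fs.promises.readdir(logDir)
        let deletedCount = 0
        for (const file of files) {
            if (!file.startsWith('chatluna-log-') || !file.endsWith('.log')) {
                continue
            }
            const filePath = path.join(logDir, file)
            try {
                const stats = await fs.promises.stat(filePath)
                if (stats.mtimeMs < cutoff) {
                    await fs.promises.unlink(filePath)
                    deletedCount += 1
                }
            } catch {
                // stat/unlink failures are non-fatal; continue with the next file
            }
        }
        return deletedCount
    }

    // Defer to the next event-loop turn so the caller never waits on disk I/O.
    setTimeout(() => {
        writeAndCleanup().catch(() => undefined)
    }, 0)
}
```

The caller returns immediately; the write and the cleanup both run on a later event-loop turn.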

Benefits

  • Prevents I/O operations from blocking the main event loop
  • Improves application responsiveness during logging
  • More resilient cleanup process with better error handling

Move file writing and cleanup operations to async function executed with
setTimeout to prevent blocking the main event loop. This allows the logging
process to complete without waiting for disk I/O operations.

- Wrap writeFile and cleanup logic in writeAndCleanup async function
- Schedule execution with setTimeout(0) to defer to next event loop
- Add error handling for stat and unlink operations
- Track deleted count instead of logging each deletion
- Ignore failed file deletions gracefully
@gemini-code-assist
Contributor

Summary of Changes

Hello @dingyi222666, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the logger utility to prevent file input/output operations from blocking the main application thread. By asynchronously handling log file writing and the cleanup of old log files, the application's responsiveness is improved, and the log management process becomes more robust with graceful error handling for file system interactions.

Highlights

  • Non-blocking I/O: File writing and log cleanup operations are now non-blocking, deferred to the next event loop iteration to improve application responsiveness.
  • Enhanced Error Handling: Robust error handling has been added for file stat and deletion operations, making potential failures non-fatal and more resilient.
  • Improved Cleanup Logging: The log cleanup process now includes a count of successfully deleted old log files in the debug output.



@coderabbitai
Contributor

coderabbitai bot commented Jan 28, 2026

Caution

Review failed

The pull request is closed.

Walkthrough

This PR refactors the log-file cleanup logic, introducing an async helper function writeAndCleanup whose execution is deferred via setTimeout. Deletion of log files older than seven days is now handled file by file, with per-file error capture and a pause between deletions, making cleanup more robust.

Changes

Cohort / File | Change Summary
Logger cleanup refactor
packages/core/src/utils/logger.ts
Split the synchronous log write and file cleanup into an async writeAndCleanup helper, scheduled for deferred execution via setTimeout. Wrapped the per-file stat and unlink operations in try/catch so log files older than seven days are deleted one at a time without aborting on failure. Added a yield between deletions (setTimeout(0)) to cede control to the event loop, and replaced per-deletion log lines with a final summary message.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

  • #688 — Also modifies packages/core/src/utils/logger.ts, implementing automatic deletion of log files older than seven days; this PR builds on it by refactoring the work to run asynchronously and improving error handling.

Poem

A little rabbit tidies old logs away,
Async handling suits them well,
The records of seven days past
Are gently deleted, none left behind,
Yielding control, it hops along slowly. 🐰📝✨


@dingyi222666 dingyi222666 merged commit 75e95d8 into v1-dev Jan 28, 2026
2 of 3 checks passed

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the logging utility to perform file I/O operations in a non-blocking manner, which improves application responsiveness. However, a critical security concern exists due to the use of a predictable path in the shared temporary directory for log files, potentially leading to information disclosure or symbolic link attacks by local users. It is recommended to use secure temporary directory creation with restrictive permissions. Additionally, I've suggested enhancing error handling to prevent silent failures during log operations and refactoring the log cleanup logic to use Promise.all for improved performance and readability.

Comment on lines +87 to +89
    setTimeout(() => {
        writeAndCleanup().catch(() => undefined)
    }, 0)

high

The .catch(() => undefined) silently swallows all errors that might occur during file writing or cleanup, including critical errors like failing to write the log file. This could lead to silent data loss and make debugging difficult. It's better to log these errors to ensure that failures in the logging mechanism are visible.

    setTimeout(() => {
        writeAndCleanup().catch((error) => {
            logger.error(`[${tag}] Failed to write or clean up log files:`, error)
        })
    }, 0)

    const writeAndCleanup = async () => {
        await fs.promises.writeFile(logFile, output)

security-medium

The application writes log data to a predictable file path in the system's shared temporary directory (/tmp/chatluna/logs/). This practice introduces two security risks:

  1. Information Disclosure: Files created in shared temporary directories are often world-readable by default. If the logs contain sensitive information or Personally Identifiable Information (PII), other local users on the system may be able to read them.
  2. Symbolic Link Attack: An attacker can create a symbolic link at the predictable log file path before the application creates the file. If the application has higher privileges, it may follow the symlink and overwrite an arbitrary sensitive file with the log content, leading to a Denial of Service or potential privilege escalation.

Remediation:

  • Use a secure method for creating temporary files, such as fs.mkdtemp to create a private directory with restrictive permissions (e.g., 0700).
  • Ensure that the log directory and files are created with restrictive permissions that only allow the application's user to read and write.
  • Consider allowing users to configure a dedicated, secure log directory outside of the shared temporary space.
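
The first two remediation points can be sketched as follows, assuming Node's fs.promises API. The helper names (createSecureLogDir, writeSecureLog) are illustrative, not functions from the PR.

```typescript
import * as fs from 'fs'
import * as os from 'os'
import * as path from 'path'

// Illustrative helpers, not the PR's code.
async function createSecureLogDir(): Promise<string> {
    // mkdtemp appends six random characters and, on POSIX systems, creates
    // the directory with mode 0o700, so other local users cannot read it.
    return fs.promises.mkdtemp(path.join(os.tmpdir(), 'chatluna-logs-'))
}

async function writeSecureLog(
    dir: string,
    name: string,
    output: string
): Promise<string> {
    const logFile = path.join(dir, name)
    // 0o600: readable and writable by the owning user only.
    await fs.promises.writeFile(logFile, output, { mode: 0o600 })
    return logFile
}
```

Because mkdtemp randomizes the directory name, the log path is no longer predictable, which also defeats the symlink pre-creation attack described above.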

Comment on lines +58 to 82
    let deletedCount = 0

    for (const file of files) {
        if (!file.startsWith('chatluna-log-') || !file.endsWith('.log')) {
            continue
        }

        const filePath = `${logDir}/${file}`
        let stats: fs.Stats
        try {
            stats = await fs.promises.stat(filePath)
        } catch {
            continue
        }

        if (stats.mtimeMs < sevenDaysAgo) {
            try {
                await fs.promises.unlink(filePath)
                deletedCount += 1
            } catch {
                // ignore failed deletions
            }
            await new Promise((resolve) => setTimeout(resolve, 0))
        }
    }

medium

The current implementation for cleaning up old log files processes them sequentially using a for...of loop with await. For a large number of log files, this can be inefficient. The use of await new Promise((resolve) => setTimeout(resolve, 0)) to yield to the event loop can also be improved.

A more performant and idiomatic approach is to process files in parallel using Promise.all. This will issue all file system operations concurrently, making the cleanup process faster and removing the need for manual yielding.

        const deletionResults = await Promise.all(
            files.map(async (file) => {
                if (!file.startsWith('chatluna-log-') || !file.endsWith('.log')) {
                    return 0
                }

                const filePath = `${logDir}/${file}`
                try {
                    const stats = await fs.promises.stat(filePath)
                    if (stats.mtimeMs < sevenDaysAgo) {
                        await fs.promises.unlink(filePath)
                        return 1
                    }
                } catch (error) {
                    // Log errors for individual file operations but continue with others.
                    logger.warn(`[${tag}] Failed to process old log file ${filePath}:`, error)
                }
                return 0
            })
        )

        const deletedCount = deletionResults.reduce((sum, count) => sum + count, 0)

@dingyi222666 dingyi222666 deleted the fix/logger-async branch January 28, 2026 15:17