max_log_files can't work #2937

Open
laiyi55 opened this issue Apr 15, 2024 · 7 comments · May be fixed by #2966

Comments

@laiyi55

laiyi55 commented Apr 15, 2024

Bug Report

Version

tracing = "0.1.40"
tracing-subscriber = {version = "0.3.18", features = ['env-filter', 'local-time']}
tracing-appender = "0.2.3"

Platform

macOS, Android

Description

I found that max_log_files does not work.

let file_appender = RollingFileAppender::builder()
            .rotation(Rotation::MINUTELY) // rotate log files once every minute
            .max_log_files(2)
            .filename_prefix("box_tracing") // log file names will be prefixed with `box_tracing.`
            .filename_suffix("log") // log file names will be suffixed with `.log`
            .build(file_path) // store log files in the directory given by `file_path`
            .expect("initializing rolling file appender failed");

I tried to find the reason, and I found that files are only removed along the path write -> refresh_writer -> prune_old_logs.

impl io::Write for RollingFileAppender {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        let now = self.now();
        let writer = self.writer.get_mut();
        if let Some(current_time) = self.state.should_rollover(now) {
            let _did_cas = self.state.advance_date(now, current_time);
            debug_assert!(_did_cas, "if we have &mut access to the appender, no other thread can have advanced the timestamp...");
            self.state.refresh_writer(now, writer);
        }
        writer.write(buf)
    }

    fn flush(&mut self) -> io::Result<()> {
        self.writer.get_mut().flush()
    }
}

So can we add prune_old_logs to Inner::new? If a write never triggers a rollover, old files are never removed. I hope we can trim the files when tracing is initialized. By the way, can we set a maximum file size? I found some issues about this, but there is no code for it yet.
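
As a workaround, a minimal sketch of what pruning at startup could look like, assuming a hypothetical helper (prune_on_startup is not part of tracing-appender) that is run once before building the appender:

use std::{fs, io, path::Path};

/// Hypothetical startup helper: keep at most `max_files` log files matching
/// `prefix`/`suffix` in `dir`, deleting the oldest ones. This only mirrors the
/// idea of running `prune_old_logs` at init; it is not the crate's implementation.
fn prune_on_startup(dir: &Path, prefix: &str, suffix: &str, max_files: usize) -> io::Result<()> {
    let mut logs: Vec<_> = fs::read_dir(dir)?
        .filter_map(Result::ok)
        .filter(|e| {
            let name = e.file_name();
            let name = name.to_string_lossy();
            name.starts_with(prefix) && name.ends_with(suffix)
        })
        .collect();
    // The appender's zero-padded date suffix sorts lexicographically, so
    // sorting by file name orders the files from oldest to newest.
    logs.sort_by_key(|e| e.file_name());
    if logs.len() > max_files {
        for old in &logs[..logs.len() - max_files] {
            fs::remove_file(old.path())?;
        }
    }
    Ok(())
}

Calling something like prune_on_startup(Path::new(file_path), "box_tracing", "log", 2) before build(file_path) would trim leftovers from earlier runs even if the process exits again before the first rollover.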

@kaffarell
Contributor

I tried your example just now and it works perfectly: it creates a new file every minute and deletes the old one, so there are always only two files.

In which case does your file rolling 'not work'? Even if it somehow fails, I think it's unwise to start deleting files without having rolled over.

Regarding rolling over at a specific file size, there are a few PRs open: #2904 and #2497, both of which have not been merged yet!

@laiyi55
Author

laiyi55 commented Apr 16, 2024

I agree that when we write logs every minute, a new file is created every minute and the old one is deleted.
But this is my situation:
I set the rotation to daily with a maximum of 2 files, but my process starts at 6:00 am and exits at 7:00 am. So every day when my process starts it creates a new file, and when I write logs, should_rollover returns false, so refresh_writer never runs and prune_old_logs never runs. The number of files will grow beyond 2.
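
For reference, a minimal sketch of the configuration described here (the path is a placeholder), using the same builder API as above:

use tracing_appender::rolling::{RollingFileAppender, Rotation};

// Rotate once per day and keep at most two log files. If the process only
// lives from 6:00 to 7:00, no rollover happens during a run, so the pruning
// that is tied to rollover never fires and old files accumulate.
let file_appender = RollingFileAppender::builder()
    .rotation(Rotation::DAILY)
    .max_log_files(2)
    .filename_prefix("box_tracing")
    .filename_suffix("log")
    .build("/var/log/myapp")
    .expect("initializing rolling file appender failed");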

@kaffarell
Contributor

The thing is that we first call prune_old_logs and then create_writer (which creates the new file). This means that there are three files until the next rollover happens.
I get that this can be confusing, but I think this is the correct behavior. If you set max_files, it is the maximum number of log files you keep around, and I don't think it should include the file we are currently writing to. Maybe we should improve the documentation on this point.

@laiyi55
Author

laiyi55 commented Apr 16, 2024

I have a different idea about "we first call prune_old_logs and then create_writer":
prune_old_logs is called from write or make_writer, but create_writer is called from Inner::new (which is what RollingFileAppender::builder().build() calls).
So isn't create_writer actually called first, and prune_old_logs only later?

@laiyi55
Author

laiyi55 commented Apr 16, 2024

This is my test:

        let (_file_guard, _console_guard) = box_tracing::BoxTracing::new().init("/Users/laiyi/AndroidStudioProjects/box_core/", true);
        tracing::debug!("start");
pub struct BoxTracing {}

impl BoxTracing {
    pub fn new() -> BoxTracing {
        BoxTracing {}
    }

    pub fn init(&self, file_path: &str, is_debug: bool) -> (WorkerGuard, WorkerGuard) {
        let local_time = OffsetTime::new(
            UtcOffset::from_hms(8, 0, 0).unwrap(),
            format_description!(
                "[year]-[month]-[day] [hour]:[minute]:[second].[subsecond digits:3]"
            ),
        );

        let file_appender = RollingFileAppender::builder()
            .rotation(Rotation::MINUTELY) // rotate log files once every minute
            .max_log_files(2)
            .filename_prefix("box_tracing") // log file names will be prefixed with `box_tracing.`
            .filename_suffix("log") // log file names will be suffixed with `.log`
            .build(file_path) // store log files in the directory given by `file_path`
            .expect("initializing rolling file appender failed");
        let (file_non_blocking, _file_guard) = tracing_appender::non_blocking(file_appender);

        let file_subscriber = tracing_subscriber::fmt::layer()
            .with_ansi(false)
            .with_writer(file_non_blocking)
            .with_timer(local_time.clone());

        let (console_non_blocking, _console_guard) = tracing_appender::non_blocking(ConsoleWriter);
        let console_subscriber = tracing_subscriber::fmt::layer()
            .with_ansi(false)
            .with_writer(console_non_blocking)
            .with_timer(local_time.clone());

        let level = if is_debug {
            tracing::Level::DEBUG
        } else {
            tracing::Level::INFO
        };
        let subscriber = tracing_subscriber::registry()
            .with(EnvFilter::from_default_env().add_directive(level.into()))
            .with(file_subscriber)
            .with(console_subscriber);
        tracing::subscriber::set_global_default(subscriber).unwrap();

        return (_file_guard, _console_guard);
    }
}

struct ConsoleWriter;

impl std::io::Write for ConsoleWriter {
    fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
        let buf_len = buf.len();
        let string = String::from_utf8_lossy(buf);

        box_log("BoxLog".to_string(), string.to_string());
        Ok(buf_len)
    }

    fn flush(&mut self) -> std::io::Result<()> {
        Ok(())
    }
}

#[cfg(target_os = "macos")]
pub fn box_log(tag: String, format: String) {
    println!("{} {}", tag, format);
}

#[cfg(target_os = "android")]
pub fn box_log(tag: String, format: String) {
    unsafe {
        __android_log_write(
            LogPriority::DEBUG as _,
            CString::new(tag.clone()).unwrap().as_ptr(),
            CString::new(format.clone()).unwrap().as_ptr(),
        );
    }
}

Every minute I run this process, and the old files are never removed.

@kaffarell
Contributor

kaffarell commented Apr 22, 2024

Oh, you are right. Though one could argue this is a weird use-case...
When creating a new RollingFileAppender we instantly try to open the current file (prefix + time), and if that doesn't work we create one. What if, when we fail to open the file at startup, we do nothing and let the FileAppender roll over normally on the next log call? This would allow the old files to get cleared.
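
A hypothetical sketch of that proposal (illustrative only, not the crate's actual Inner::new), assuming an initial-writer helper that simply gives up when the current file does not exist yet:

use std::{fs::{File, OpenOptions}, path::Path};

// Illustrative only: reuse the current period's file if it already exists.
// On failure, return None instead of creating the file, so the very first
// `write` call takes the normal rollover path, where `prune_old_logs` runs
// before the new file is created.
fn try_open_initial_writer(current_log: &Path) -> Option<File> {
    OpenOptions::new().append(true).open(current_log).ok()
}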

@laiyi55
Author

laiyi55 commented Apr 23, 2024

But it is a normal case for an app, and I tried calling prune_old_logs at the end, and it does work. By the way, the entry creation time is not supported on Android, so I changed the sort to use the file name. Do you have any better idea?
files.sort_by_key(|p| p.file_name());

kaffarell added a commit to kaffarell/tracing that referenced this issue May 10, 2024
The prune_old_logs function only gets called when we roll over the appender. The problem is that at startup we create an initial log file without rolling over; if, e.g., the executable then exits before rolling over for the first time, we never call prune_old_logs.

E.g.: if we have a tracing-appender that rolls over every minute and a max_files parameter of 2, running the program (which prints a single log line and exits) every minute will result in a new log file every time without deleting the old ones.

Fixes: tokio-rs#2937
@kaffarell kaffarell linked a pull request May 10, 2024 that will close this issue