
[Bug]: workers stopped consuming job, sometime event loss #2570

Open
1 task done
theyashwantsoni opened this issue May 17, 2024 · 5 comments

Comments

@theyashwantsoni

Version

1.84.0

Platform

NodeJS

What happened?

Sometimes a few jobs are not getting processed, and sometimes no jobs are getting processed at all; it seems to be caused by a memory issue.
(Screenshot, 2024-05-17 8:56 PM: memory usage spike)
It happens just after the memory spike.

How to reproduce.

No response

Relevant log output

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
@theyashwantsoni theyashwantsoni added the bug Something isn't working label May 17, 2024
@manast
Contributor

manast commented May 17, 2024

Is it version 1.84.0? That's pretty old... Also, we will need some code to reproduce the issue.

@theyashwantsoni
Author

I have more than 200 types of jobs running in my system, and everything was fine even with the older version. This started happening recently, and I haven't made any changes in the code.

Earlier I thought the Node process was being killed by an unhandled exception, but I tried to reproduce that and had no luck.
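One way to rule that out (a minimal sketch; these are standard Node.js process hooks, not code from the project) would be to log process-level failures before the process dies:

// Hypothetical diagnostic snippet: log any crash path that could silently kill the worker process.
process.on("uncaughtException", (err) => {
    console.error("uncaughtException, process will exit:", err)
})

process.on("unhandledRejection", (reason) => {
    console.error("unhandledRejection:", reason)
})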

@theyashwantsoni
Author

import bullRedisConnection from "@/config/bullmq/redisConnection";
import { Job, Queue, QueueScheduler, Worker } from "bullmq";
import JobProcessor from "./jobProcessor";

// TaskPriority is an enum defined elsewhere in the project; each priority maps to its own queue.
export default class BullWorker {
    queueName: string
    worker: Worker

    constructor(priority: TaskPriority) {
        this.queueName = priority
        this.initWorker()
    }

    private initWorker() {
        // QueueScheduler is required in older BullMQ versions to handle stalled and delayed jobs.
        const _ = new QueueScheduler(this.queueName, { connection: bullRedisConnection.getConnection() });
        this.worker = new Worker(this.queueName, async (job: Job) => {
            const jobProcessor = new JobProcessor(job)
            const result = await jobProcessor.execute()
            return result
        },
        {
            autorun: false,
            connection: bullRedisConnection.getConnection(),
        })
    }

    public run() {
        this.worker.run()
    }
}
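For reference, a minimal sketch of how this worker could be instrumented (the priority value and handler bodies are placeholders; error and failed are existing BullMQ Worker events), so that worker-level errors show up in logs instead of the worker just appearing to stop consuming:

// Hypothetical instrumentation, not part of the reported code.
const bullWorker = new BullWorker("high" as TaskPriority)  // placeholder priority value

bullWorker.worker.on("error", (err) => {
    console.error("worker error:", err)
})
bullWorker.worker.on("failed", (job, err) => {
    console.error(`job ${job?.id} failed:`, err)
})

bullWorker.run()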

@theyashwantsoni
Author

import { TaskState } from "@/enums/taskResult/taskState";
import { Job } from "bullmq";
import AdapterFactory from "./adapterFactory";
import JobProgressUpdateInterface from "./jobProgressUpdateInterface";

export default class JobProcessor implements JobProgressUpdateInterface {
    job: Job
    adapterFactory: AdapterFactory

    constructor(job: Job) {
        this.job = job
        this.adapterFactory = new AdapterFactory(job.name, this)
    }

    async execute(): Promise<TaskResultOutput> {
        // JSON.stringify returns a string, so TaskResultOutput is presumably a string alias.
        const result = await this.adapterFactory.getAdapter().processTask(this.job.data)
        return JSON.stringify(result)
    }
}

@manast
Contributor

manast commented May 20, 2024

Unfortunately this information is not enough for us to take any further action. Maybe you could start by checking which states the jobs are currently in, verifying that all the workers are indeed connected to Redis, and so on.
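For example, a minimal sketch of such a check (the queue name is a placeholder; getJobCounts and getWorkers are existing Queue methods in BullMQ):

import { Queue } from "bullmq"
import bullRedisConnection from "@/config/bullmq/redisConnection"

// Hypothetical diagnostic script: inspect job states and connected workers for one queue.
async function inspectQueue(queueName: string) {
    const queue = new Queue(queueName, { connection: bullRedisConnection.getConnection() })

    // Counts per state show whether jobs are piling up in "waiting" while nothing is "active".
    const counts = await queue.getJobCounts("waiting", "active", "delayed", "failed", "completed")
    console.log(queueName, "job counts:", counts)

    // getWorkers() lists the worker connections Redis currently sees for this queue.
    const workers = await queue.getWorkers()
    console.log(queueName, "connected workers:", workers.length)

    await queue.close()
}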

@manast manast added cannot reproduce and removed bug Something isn't working labels May 20, 2024