
How to handle long-running jobs in Laravel

Jan 29, 2023

Long-running jobs can be difficult to work with. They can:

  • Be killed before they end
  • Be difficult to retry
  • Fail or succeed depending on the input

Fortunately, there are ways to work around the issues with long-running jobs in Laravel. Let's explore a couple of solutions (the last one is the best, so keep reading).

The job

For this article let's use the example of a job that uploads all images from a blog post into an S3 bucket.

Here is the job in question:

class StorePostImages implements ShouldQueue
{
    public function __construct(public Post $post, public User $owner)
    {
    }

    public function handle()
    {
        foreach ($this->post->images as $image) {
            $content = file_get_contents($image->url);
            Storage::disk('s3')->put(
                "images/{$this->post->id}/{$image->filename}",
                $content
            );
        }

        $this->owner->notify(new PostImagesStored($this->post));
    }
}

We would dispatch that job (from a controller, for example) like this:

StorePostImages::dispatch($post, $request->user());

This job does two things: it stores the post's images in S3, and after all images were successfully stored, it notifies the owner.

Depending on the number of images this might take a long time.

By default, Laravel kills jobs after 60 seconds 😱.

Since this might take longer, one approach is to modify the job's timeout.

Changing the timeout

To modify the job's timeout, you can override the $timeout property in your job class.

class StorePostImages implements ShouldQueue
{
    //👇 Making the timeout larger
    public $timeout = 120;

    public function __construct(public Post $post, public User $owner)
    {
    }

    public function handle()
    {
        foreach ($this->post->images as $image) {
            $content = file_get_contents($image->url);
            Storage::disk('s3')->put(
                "images/{$this->post->id}/{$image->filename}",
                $content
            );
        }

        $this->owner->notify(new PostImagesStored($this->post));
    }
}

Keep in mind that you also have to change the retry_after option in your queue connection's configuration to avoid duplicated jobs.

// config/queue.php
//...
        'redis' => [
            'driver' => 'redis',
            'connection' => 'default',
            'queue' => env('REDIS_QUEUE', 'default'),
            'retry_after' => 130, // 👈 Should be bigger than the timeout
            'block_for' => null,
            'after_commit' => false,
        ],
//...

Laravel uses $timeout to know how long a worker is allowed to spend processing the job. Once that time passes, Laravel kills the worker.

retry_after defines how many seconds the queue connection waits before retrying a job. It doesn't matter whether a worker is still processing it: if the job isn't marked as finished, Laravel releases it to be processed again by a different worker. This can cause a job to run twice instead of once.
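
By the way, that 60-second default comes from the worker itself: queue:work has a --timeout option, and a $timeout property on the job class takes precedence over it. So another option is to raise the limit for an entire worker (here with the redis connection from above):

php artisan queue:work redis --timeout=120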

Changing the timeout is a good approach, but I don't like it for two reasons:

  • I have to modify the queue configuration, which might affect other jobs.

  • What if the job takes more than 120 seconds? What if, in the future, we come across a post with A LOT of images?

Making the job smaller

For those reasons, instead of changing the timeout, why don't we make the jobs smaller?

Instead of pushing one big job for all the images in a post, we can push one job for each image.

The job looks like this:

class StoreImage implements ShouldQueue
{
    public function __construct(public Image $image, public $postId)
    {
    }

    public function handle()
    {
        // This is fast 👌 ⚡
        $content = file_get_contents($this->image->url);
        Storage::disk('s3')->put(
            "images/{$this->postId}/{$this->image->filename}",
            $content
        );
    }
}

This job takes a single image and stores it in S3. It doesn't take much time to run.

We would add one job to the queue for each image of the post:

foreach ($post->images as $image) {
    StoreImage::dispatch($image, $post->id);
}

Great. We can now process all images without timeout issues. But how can we notify the user that the images were stored correctly? Is that even possible?

Batching jobs

YES, fortunately, Laravel has an amazing feature called job batching.

With job batching, we can register a callback that executes after all jobs have finished successfully.

We just need to pass an array of jobs to Bus::batch and register the callback with then:

$jobs = [];
foreach ($post->images as $image) {
    $jobs[] = new StoreImage($image, $post->id);
}

// Batching jobs 🥳
// ($user is the post's owner, e.g. $request->user() as in the first example)
Bus::batch($jobs)->then(function (Batch $batch) use ($user, $post) {
    $user->notify(new PostImagesStored($post));
    // 👆 Will be executed after all jobs finish successfully
})->dispatch();
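
One caveat: jobs dispatched through Bus::batch must use the Batchable trait (Laravel will throw an exception otherwise), and batching needs the job_batches table, which you can generate a migration for with the queue:batches-table Artisan command. So StoreImage needs a small addition:

use Illuminate\Bus\Batchable;

class StoreImage implements ShouldQueue
{
    use Batchable; // 👈 Required for jobs dispatched via Bus::batch

    // ... same constructor and handle() as before
}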

Now we don't have to worry about jobs timing out AND we can notify the user when all images are stored.
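
Bonus: then only runs if every job in the batch succeeds. If you also want to tell the user when something goes wrong, Bus::batch accepts a catch callback that executes on the first job failure (PostImagesFailed here is a hypothetical notification for this example, not something we built above):

Bus::batch($jobs)->then(function (Batch $batch) use ($user, $post) {
    $user->notify(new PostImagesStored($post));
})->catch(function (Batch $batch, Throwable $e) use ($user, $post) {
    // 👆 Executed the first time a job within the batch fails
    $user->notify(new PostImagesFailed($post)); // hypothetical notification
})->dispatch();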