Background Jobs

Learn about the background job system in the Inventory Service, including scheduled tasks, event-driven processing, and queue management.

Job System Overview

The Inventory Service uses Laravel's queue system to handle background processing for time-intensive operations, scheduled tasks, and event-driven workflows. Jobs are processed asynchronously to maintain system responsiveness.
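All of the jobs described below follow Laravel's standard queueable-job shape. As a generic illustration of that shape (this is not one of the service's actual classes), a minimal queued job looks like this:

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

// Generic example of an asynchronously processed job.
class ExampleBackgroundJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(private readonly int $productId)
    {
    }

    public function handle(): void
    {
        // Time-intensive work runs here, outside the request/response cycle.
    }
}

// Dispatching pushes the job onto a queue instead of running it inline:
// ExampleBackgroundJob::dispatch($productId)->onConnection('database')->onQueue('default');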

Queue Configuration

The service supports multiple queue drivers and connections:

Connection   Driver     Use Case                               Default Queue
database     database   Local development and testing          default
rabbitmq     rabbitmq   Production event broadcasting          default
redis        redis      High-performance caching               default
sync         sync       Synchronous processing (development)   N/A
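These connections map onto Laravel's standard queue configuration. A minimal sketch of the relevant entries in config/queue.php is shown below; the exact options (and the RabbitMQ driver's connection settings, which come from a separate queue package) are assumptions rather than the service's actual configuration:

<?php

// config/queue.php (illustrative excerpt; values are assumptions)
return [
    'default' => env('QUEUE_CONNECTION', 'database'),

    'connections' => [
        // Local development and testing
        'database' => [
            'driver' => 'database',
            'table' => 'jobs',
            'queue' => 'default',
            'retry_after' => 90,
        ],

        // Production event broadcasting; the driver and its connection
        // options are provided by the RabbitMQ queue package in use.
        'rabbitmq' => [
            'driver' => 'rabbitmq',
            'queue' => 'default',
            // host, port, exchange, etc. omitted here
        ],

        // High-performance processing backed by Redis
        'redis' => [
            'driver' => 'redis',
            'connection' => 'default',
            'queue' => 'default',
            'retry_after' => 90,
        ],

        // Immediate, synchronous execution during development
        'sync' => [
            'driver' => 'sync',
        ],
    ],
];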

Scheduled Jobs

CreateStockSnapshot

Automatically generates daily stock snapshots for inventory tracking and reporting.

Schedule

  • Frequency: Daily
  • Timezone: Europe/Budapest
  • Execution: Scheduled via console routes

Functionality

  • Aggregates quantities across all storage places (ACTIVE, PUFFER, SCRAP, RESERVED, PICKING, PACKING)
  • Groups data by product_id for comprehensive inventory summaries
  • Creates snapshot records in StorageUnitSnapshot table
  • Automatically purges snapshots older than 6 months
  • Calculates total quantities for each product across all storage types

Data Structure

Each snapshot record contains:

  • product_id - Product identifier
  • active_quantity - Available stock quantity
  • puffer_quantity - Buffer storage quantity
  • scrap_quantity - Damaged items quantity
  • reserved_quantity - Quantity reserved for orders
  • under_packing_quantity - Quantity of items currently being packed
  • under_picking_quantity - Quantity of items currently being picked
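Putting the functionality and the record layout together, a minimal sketch of what the snapshot handler might do is shown below. Only the StorageUnitSnapshot model, the quantity fields listed above, and the six-month purge come from this documentation; the namespace and the source table and column names being queried are assumptions:

<?php

namespace App\Jobs;

use App\Models\StorageUnitSnapshot;
use Illuminate\Support\Facades\DB;

class CreateStockSnapshot
{
    public function handle(): void
    {
        // Sum quantities per product and storage place; 'storage_units',
        // 'storage_place' and 'quantity' are assumed names.
        $totals = DB::table('storage_units')
            ->select('product_id', 'storage_place', DB::raw('SUM(quantity) AS quantity'))
            ->groupBy('product_id', 'storage_place')
            ->get()
            ->groupBy('product_id');

        foreach ($totals as $productId => $rows) {
            $byPlace = $rows->pluck('quantity', 'storage_place');

            // One snapshot record per product, covering every storage type.
            StorageUnitSnapshot::create([
                'product_id'             => $productId,
                'active_quantity'        => (int) $byPlace->get('ACTIVE', 0),
                'puffer_quantity'        => (int) $byPlace->get('PUFFER', 0),
                'scrap_quantity'         => (int) $byPlace->get('SCRAP', 0),
                'reserved_quantity'      => (int) $byPlace->get('RESERVED', 0),
                'under_picking_quantity' => (int) $byPlace->get('PICKING', 0),
                'under_packing_quantity' => (int) $byPlace->get('PACKING', 0),
            ]);
        }

        // Purge snapshots older than 6 months.
        StorageUnitSnapshot::where('created_at', '<', now()->subMonths(6))->delete();
    }
}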

Event-Driven Jobs

ParcelProcessJob

Handles complex parcel processing workflows through the decision tree system.

Trigger Conditions

  • Dispatched from the CheckFullStorageQuantity step (low-priority queue with a 500 ms delay)
  • Dispatched from the CheckExactLotNumbersQuantity step (high-priority queue with a 1 second delay)
  • Uses RabbitMQ connection for distributed processing

Processing Flow

  • Executes decision tree steps starting from CHECK_PARCEL_PROCESS_METHOD
  • Maintains DecisionTreeStates throughout processing
  • Continues until END_DECISION_TREE step is reached
  • Wraps entire process in database transaction for consistency
  • Logs progress and errors for monitoring

Input Requirements

  • DecisionTreeStates - Current state of the decision tree
  • MoveStorageUnitDTO - Storage unit movement details
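Based on the processing flow above, a sketch of what the job might look like. The step-runner API (DecisionTreeStep::run()) and the DTO property names are assumptions; only the step names, the transaction wrapping, and the logging behavior come from this documentation:

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;

// Imports for the project's DecisionTreeStates, MoveStorageUnitDTO and
// DecisionTreeStep types are omitted because their namespaces are not documented.
class ParcelProcessJob implements ShouldQueue
{
    use Dispatchable, Queueable;

    public function __construct(
        private DecisionTreeStates $state,
        private MoveStorageUnitDTO $moveStorageUnitDTO,
    ) {
    }

    public function handle(): void
    {
        Log::info('ParcelProcessJob started', ['parcel_id' => $this->moveStorageUnitDTO->parcelId]);

        DB::transaction(function () {
            // Start at CHECK_PARCEL_PROCESS_METHOD and keep executing steps,
            // carrying the decision-tree state forward, until END_DECISION_TREE.
            $step = DecisionTreeStep::CHECK_PARCEL_PROCESS_METHOD;

            while ($step !== DecisionTreeStep::END_DECISION_TREE) {
                $step = $step->run($this->state, $this->moveStorageUnitDTO);
            }
        });

        Log::info('ParcelProcessJob finished', ['parcel_id' => $this->moveStorageUnitDTO->parcelId]);
    }
}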

Broadcast Event Jobs

Jobs responsible for handling parcel lifecycle events and broadcasting changes to external systems.

ParcelDeletedEvent

Processes the deletion of parcels and returns reserved stock to active storage.

Processing Logic

  • Deletes waiting reservations associated with the parcel
  • Retrieves all reserved storage units for the parcel
  • Converts each reserved unit back to active storage
  • Uses DELETE_ORDER_PROCESS workflow for proper state transitions
  • Maintains complete transaction integrity

Decision Tree Integration

  • Creates ProcessDetailsDTO with DELETE_ORDER_PROCESS type
  • Sets origin as RESERVED and destination as ACTIVE_STORAGE
  • Executes full decision tree for each storage unit
  • Includes process reason: "Parcel deleted"
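A sketch of the corresponding handle() method, combining the processing logic and the decision-tree integration described above. The model query scopes, the DTO constructor signature and the decision-tree entry point are assumptions; only the class names, process type, origin/destination and reason come from this documentation:

// Excerpt of a hypothetical handle() method; imports omitted.
public function handle(): void
{
    DB::transaction(function () {
        // Remove waiting reservations tied to the deleted parcel.
        WaitingReservation::where('parcel_id', $this->parcelId)->delete();

        // Return every reserved storage unit to active storage.
        ReservedStorageUnit::where('parcel_id', $this->parcelId)
            ->get()
            ->each(function ($unit) {
                $details = new ProcessDetailsDTO(
                    processType: ProcessType::DELETE_ORDER_PROCESS,
                    origin: StoragePlace::RESERVED,
                    destination: StoragePlace::ACTIVE_STORAGE,
                    reason: 'Parcel deleted',
                );

                // Execute the full decision tree so the state transition
                // follows the DELETE_ORDER_PROCESS workflow.
                DecisionTree::execute($unit, $details);
            });
    });
}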

ParcelMovementEvent

Broadcasts parcel movement events to external systems via RabbitMQ.

Dispatch Configuration

  • Uses RabbitMQ connection for reliable message delivery
  • Queue name determined dynamically by QueueEvents helper
  • Accepts flexible data array for event payload

ParcelItemsCorrection

Handles correction events for parcel item discrepancies.

Queue Routing

  • Dispatched to 'parcels_queue' on RabbitMQ connection
  • Triggered by QueueEvents helper for parcel corrections
  • Uses ReservedStorageUnit model for data validation

Queue Management

Queue Priorities

Priority      Queue Name      Use Case                          Delay
High          high            Exact lot number processing       1 second
Low           low             Full storage quantity checks      500 milliseconds
Default       default         Standard background processing    None
Specialized   parcels_queue   Parcel-specific corrections       None

Connection Strategy

  • RabbitMQ: Production event broadcasting and inter-service communication
  • Database: Local development and internal job processing
  • Redis: High-performance caching and session management
  • Sync: Development mode for immediate execution

Error Handling & Monitoring

Transaction Management

  • All critical jobs wrapped in database transactions
  • Automatic rollback on exceptions
  • Comprehensive error logging with context
  • Failed job tracking in dedicated tables
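In code, this pattern typically amounts to wrapping the critical work in DB::transaction() and re-throwing on failure so the worker records the job in the failed_jobs table; a generic sketch, not taken from a specific job:

use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;

public function handle(): void
{
    try {
        DB::transaction(function () {
            // Critical work here; any exception rolls the whole transaction back.
        });
    } catch (\Throwable $e) {
        // Log the error with enough context to debug, then re-throw so the
        // queue worker marks the job as failed (failed_jobs table).
        Log::error('Background job failed', [
            'exception' => $e->getMessage(),
            'trace'     => $e->getTraceAsString(),
        ]);

        throw $e;
    }
}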

Logging Strategy

  • Job start/completion events logged with parcel IDs
  • Error contexts include stack traces and relevant data
  • Progress tracking for long-running operations
  • Queue performance metrics available

Retry Configuration

Queue-specific retry policies:

  • Database queue: 90-second retry timeout
  • Redis queue: 90-second retry timeout
  • RabbitMQ: Configurable per exchange/queue
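The 90-second value corresponds to the retry_after option on the database and Redis connections (see the configuration sketch earlier). Laravel additionally lets individual job classes control retries; whether the Inventory Service uses this is not documented here, but the mechanism looks like this:

// Hypothetical per-job retry settings (standard Laravel job properties).
class ExampleBackgroundJob implements ShouldQueue
{
    // Maximum attempts before the job is marked as failed.
    public $tries = 3;

    // Seconds to wait before retrying a failed attempt.
    public $backoff = 90;
}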

Job Dispatching Patterns

Decision Tree Integration

Jobs are commonly dispatched from decision tree steps:

// From CheckFullStorageQuantity step
ParcelProcessJob::dispatch($currentState, $storageUnitDTO)
    ->onConnection('rabbitmq')
    ->onQueue('low')
    ->delay(now()->addMilliseconds(500));

// From CheckExactLotNumbersQuantity step
ParcelProcessJob::dispatch($currentState, $storageUnitDTO)
    ->onQueue('high')
    ->delay(now()->addSeconds(1));

Event Broadcasting

Event-driven job dispatching through QueueEvents helper:

// Parcel movement events
ParcelMovementEvent::dispatch($parcelEventData)
    ->onConnection('rabbitmq')
    ->onQueue($queueName);

// Parcel corrections
ParcelItemsCorrection::dispatch($dispatchArray)
    ->onConnection('rabbitmq')
    ->onQueue('parcels_queue');

Scheduled Execution

Daily snapshot creation via Laravel scheduler:

// In routes/console.php
Schedule::call(function () {
    $snapShotInstance = new CreateStockSnapshot();
    $snapShotInstance->handle();
})->daily()->timezone('Europe/Budapest');

Best Practices

Performance Optimization

  • Use appropriate queue priorities based on operation urgency
  • Implement delays to prevent system overload
  • Batch similar operations when possible (see the batching sketch after this list)
  • Monitor queue lengths and processing times
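For batching, Laravel's job batching is one option. The sketch below assumes the batched job uses the Batchable trait and that the job_batches table has been migrated; neither is confirmed by this documentation:

use Illuminate\Support\Facades\Bus;

// Hypothetical example of batching several similar jobs together.
Bus::batch([
    new ParcelProcessJob($stateA, $storageUnitDtoA),
    new ParcelProcessJob($stateB, $storageUnitDtoB),
])
    ->onConnection('rabbitmq')
    ->onQueue('low')
    ->dispatch();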

Data Integrity

  • Always wrap critical operations in database transactions
  • Validate input data before processing
  • Implement proper error handling and rollback mechanisms
  • Log sufficient context for debugging

Monitoring & Maintenance

  • Regular cleanup of old snapshot data (automated via job)
  • Monitor failed job rates and investigate patterns
  • Track processing times and optimize slow operations
  • Ensure queue workers are properly scaled

Integration Points

Decision Tree System

  • Jobs execute decision tree steps for complex business logic
  • State management maintained across asynchronous operations
  • Automatic step progression until completion

External Services

  • RabbitMQ for inter-service communication
  • Event broadcasting to notify dependent systems
  • Parcel status updates to courier services

Data Models

  • StorageUnitSnapshot for inventory tracking
  • ReservedStorageUnit for parcel management
  • WaitingReservation for queue management
  • Various storage unit models for state transitions