10 Laravel Performance Tips I Learned the Hard Way
Practical performance optimization techniques for Laravel applications based on real-world experience scaling a production CRM.
Robert Fridzema
Fullstack Developer

Introduction
After 6 years of building and scaling a CRM system with Laravel, I've learned that performance optimization is both an art and a science. Our application grew from handling 50 users to over 2,000 daily active users, processing thousands of API requests per minute.
In this guide, I'll share the techniques that made the biggest difference - not theoretical best practices, but battle-tested strategies with real benchmarks from production. You'll learn how to identify bottlenecks, optimize database queries, implement effective caching, and configure your server stack for maximum throughput.
What you'll learn:
- How to profile your application before optimizing
- Detecting and eliminating N+1 query problems
- Database optimization with strategic indexes and query patterns
- Multi-layer caching strategies that actually work
- Queue optimization for background processing
- PHP and server configuration for production
- Real benchmarks with before/after comparisons
Let's dive in.
Profile First: Measure Before You Optimize
The biggest mistake I made early on was optimizing based on assumptions. "This query looks slow" is not the same as "this query takes 847ms and runs 1,200 times per hour." Always measure first.
Laravel Telescope
Telescope is invaluable for development and staging environments:
```php
// config/telescope.php
'enabled' => env('TELESCOPE_ENABLED', true),

// Only record slow queries (>50ms)
'watchers' => [
    Watchers\QueryWatcher::class => [
        'enabled' => env('TELESCOPE_QUERY_WATCHER', true),
        'slow' => 50,
    ],
],
```
Clockwork Integration
For production-safe profiling, I prefer Clockwork. It's lighter than Telescope and can be enabled per-request:
```php
// Enable via middleware for specific routes
Route::middleware(['clockwork'])->group(function () {
    Route::get('/api/reports', [ReportController::class, 'index']);
});
```
Custom Query Monitoring
For production, implement targeted monitoring:
```php
// app/Providers/AppServiceProvider.php
public function boot(): void
{
    if (config('app.log_slow_queries')) {
        DB::listen(function ($query) {
            if ($query->time > 100) {
                Log::channel('slow-queries')->warning('Slow query detected', [
                    'sql' => $query->sql,
                    'bindings' => $query->bindings,
                    'time_ms' => $query->time,
                    'connection' => $query->connectionName,
                    'request_url' => request()->fullUrl(),
                    'user_id' => auth()->id(),
                ]);
            }
        });
    }
}
```
Benchmark result: After implementing query logging, we identified 23 queries over 100ms that we didn't know existed. Fixing them reduced average API response time from 340ms to 89ms.
N+1 Queries: The Silent Performance Killer
The N+1 problem is the most common performance issue I encounter in Laravel code reviews. It's also one of the easiest to fix once you understand it.
Detection with Laravel Query Detector
Install the beyondcode/laravel-query-detector package for development:
```bash
composer require beyondcode/laravel-query-detector --dev
```
The package flags N+1 queries the moment they happen in development, forcing you to fix them immediately instead of discovering them in production.
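Laravel itself (since 8.43) can also enforce this without any package. A minimal sketch for the service provider, assuming you want lazy-loading violations to throw everywhere except production:

```php
// app/Providers/AppServiceProvider.php
use Illuminate\Database\Eloquent\Model;

public function boot(): void
{
    // Any lazy-loaded relationship now throws a
    // LazyLoadingViolationException in dev and CI,
    // so N+1 queries fail loudly before they ship.
    Model::preventLazyLoading(! $this->app->isProduction());
}
```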
The Problem Illustrated
```php
// BAD: This executes N+1 queries
// 1 query for projects + N queries for clients
$projects = Project::all(); // Query 1: SELECT * FROM projects

foreach ($projects as $project) {
    echo $project->client->name; // Queries 2..N+1: SELECT * FROM clients WHERE id = ?
}

// With 100 projects, this runs 101 queries!
```
The Solution: Eager Loading
```php
// GOOD: This executes exactly 2 queries
$projects = Project::with('client')->get();
// Query 1: SELECT * FROM projects
// Query 2: SELECT * FROM clients WHERE id IN (1, 2, 3, ...)
```
Nested Eager Loading
For complex relationships, use dot notation:
```php
// Load project -> client -> contacts in 3 queries (one per nesting level)
$projects = Project::with([
    'client.contacts',
    'tasks.assignee',
    'invoices',
])->get();
```
Conditional Eager Loading with whenLoaded
When building APIs, use whenLoaded to conditionally include relationships:
```php
// app/Http/Resources/ProjectResource.php
public function toArray($request): array
{
    return [
        'id' => $this->id,
        'name' => $this->name,
        'client' => ClientResource::make($this->whenLoaded('client')),
        'tasks_count' => $this->whenCounted('tasks'),
        'total_hours' => $this->when(
            $this->relationLoaded('timeEntries'),
            fn() => $this->timeEntries->sum('hours')
        ),
    ];
}
```
Preventing N+1 in Blade Templates
Define default eager loads in your model:
```php
// app/Models/Project.php
protected $with = ['client']; // Always eager load client

// Or use a local scope for specific contexts
public function scopeWithRelations($query)
{
    return $query->with(['client', 'tasks', 'invoices']);
}
```
Benchmark result: A project listing page went from 247 queries to 4 queries, reducing load time from 2.3s to 180ms.
Database Optimization
Strategic Indexes with EXPLAIN
Before adding indexes, use EXPLAIN to understand your query plans:
```sql
EXPLAIN ANALYZE
SELECT * FROM projects
WHERE client_id = 42 AND status = 'active'
ORDER BY created_at DESC;
```
Look for:
- `Seq Scan` on large tables (a candidate for an index)
- High `cost` values
- Missing index suggestions
Creating Effective Indexes
```php
// Migration for strategic indexes
Schema::table('projects', function (Blueprint $table) {
    // Single column indexes for filtered columns
    $table->index('status');
    $table->index('created_at');

    // Composite index for common query patterns
    // Order matters! Most selective column first
    $table->index(['client_id', 'status', 'created_at']);
});

// Partial index for active records (PostgreSQL)
DB::statement("CREATE INDEX projects_active_idx ON projects (client_id) WHERE status = 'active'");
```
Query Optimization Patterns
Select only what you need:
```php
// BAD: Selects all columns including large text fields
$projects = Project::where('client_id', $clientId)->get();

// GOOD: Only select needed columns
$projects = Project::select(['id', 'name', 'status', 'created_at'])
    ->where('client_id', $clientId)
    ->get();

// BETTER: Use a dedicated query class
class ProjectListQuery
{
    public function __invoke(int $clientId): Collection
    {
        return Project::query()
            // The foreign keys (client_id, manager_id) must be selected,
            // or the eager-loaded relations will come back null
            ->select(['id', 'client_id', 'manager_id', 'name', 'status', 'budget', 'created_at'])
            ->with(['client:id,name', 'manager:id,name'])
            ->where('client_id', $clientId)
            ->orderByDesc('created_at')
            ->get();
    }
}
```
Chunking for large datasets:
```php
// BAD: Loads all records into memory
Project::all()->each(fn($p) => $this->process($p));

// GOOD: Process in chunks of 500
Project::chunk(500, function ($projects) {
    foreach ($projects as $project) {
        $this->process($project);
    }
});

// BETTER: Lazy collections for memory efficiency
Project::lazy(500)->each(fn($p) => $this->process($p));
```
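The memory savings come from how the rows are materialized: `lazy()` is built on PHP generators, which yield one item at a time instead of building the full array. A dependency-free sketch of the principle, where the hypothetical `fetchPage()` stands in for a paged database query:

```php
<?php
// Stand-in for a paged database query against a 1200-row table
function fetchPage(int $offset, int $limit): array
{
    $total = 1200;
    $rows = [];
    for ($i = $offset; $i < min($offset + $limit, $total); $i++) {
        $rows[] = ['id' => $i];
    }
    return $rows;
}

// Yields rows one by one; at most $limit rows live in memory at a time
function lazyRows(int $limit = 500): Generator
{
    $offset = 0;
    while (true) {
        $page = fetchPage($offset, $limit);
        if ($page === []) {
            return;
        }
        foreach ($page as $row) {
            yield $row;
        }
        $offset += $limit;
    }
}

$count = 0;
foreach (lazyRows() as $row) {
    $count++;
}
echo $count; // 1200
```

The consumer still sees a simple `foreach`, which is why switching from `chunk()` to `lazy()` rarely requires restructuring the calling code.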
When to Use Raw Queries
Sometimes Eloquent adds unnecessary overhead. For complex reports or bulk operations, raw queries can be 5-10x faster:
```php
// Complex aggregation - raw query is clearer and faster
$revenue = DB::select("
    SELECT
        DATE_TRUNC('month', invoices.created_at) AS month,
        SUM(invoices.amount) AS revenue,
        COUNT(DISTINCT projects.client_id) AS unique_clients
    FROM invoices
    INNER JOIN projects ON invoices.project_id = projects.id
    WHERE invoices.status = 'paid'
      AND invoices.created_at >= ?
    GROUP BY DATE_TRUNC('month', invoices.created_at)
    ORDER BY month DESC
", [now()->subYear()]);

// Bulk update - single query instead of N queries
DB::table('projects')
    ->where('status', 'pending')
    ->where('created_at', '<', now()->subDays(30))
    ->update(['status' => 'stale', 'updated_at' => now()]);
```
Benchmark result: Converting a monthly revenue report from Eloquent to raw SQL reduced execution time from 4.2s to 340ms.
Caching Strategies
Caching is where you can achieve the biggest performance gains, but it's also where things can go wrong if not implemented carefully.
Cache Tags for Organized Invalidation
```php
// Cache with tags for surgical invalidation
// Note: tags require a taggable store such as Redis or Memcached;
// the file driver does not support them
$projects = Cache::tags(['projects', "client:{$clientId}"])
    ->remember("projects:client:{$clientId}", 3600, function () use ($clientId) {
        return Project::where('client_id', $clientId)
            ->with('tasks')
            ->get();
    });

// Invalidate all projects for a specific client
Cache::tags(["client:{$clientId}"])->flush();

// Invalidate all project caches
Cache::tags(['projects'])->flush();
```
Model Caching Pattern
Create a reusable caching trait:
```php
// app/Concerns/CachesQueries.php
trait CachesQueries
{
    public static function findCached(int $id): ?static
    {
        return Cache::tags([static::class])
            ->remember(
                static::class . ":{$id}",
                config('cache.model_ttl', 3600),
                fn() => static::find($id)
            );
    }

    public static function flushCache(): void
    {
        Cache::tags([static::class])->flush();
    }

    protected static function bootCachesQueries(): void
    {
        static::saved(fn() => static::flushCache());
        static::deleted(fn() => static::flushCache());
    }
}

// Usage
$project = Project::findCached(123);
```
Query Result Caching
For expensive queries that don't change frequently:
```php
// app/Services/DashboardService.php
class DashboardService
{
    public function getStats(): array
    {
        return Cache::remember('dashboard:stats', 300, function () {
            return [
                'active_projects' => Project::where('status', 'active')->count(),
                'pending_invoices' => Invoice::where('status', 'pending')->sum('amount'),
                'tasks_due_today' => Task::whereDate('due_date', today())->count(),
                'team_utilization' => $this->calculateUtilization(),
            ];
        });
    }

    public function invalidateStats(): void
    {
        Cache::forget('dashboard:stats');
    }
}
```
HTTP Caching Headers
For API endpoints, use HTTP caching:
```php
// app/Http/Controllers/Api/ProjectController.php
public function index(Request $request)
{
    $projects = Project::with('client')
        ->where('user_id', auth()->id())
        ->get();

    $etag = md5($projects->toJson());

    if ($request->header('If-None-Match') === $etag) {
        return response()->noContent(304);
    }

    return response()
        ->json(ProjectResource::collection($projects))
        ->header('ETag', $etag)
        ->header('Cache-Control', 'private, max-age=60');
}
```
Benchmark result: Implementing multi-layer caching reduced database queries by 73% and average response time from 180ms to 45ms.
Queue Optimization
Moving heavy operations to background queues is essential for responsive applications. But queues themselves need optimization.
Job Batching for Bulk Operations
```php
// Instead of dispatching 1000 individual jobs...
foreach ($users as $user) {
    SendMonthlyReport::dispatch($user); // 1000 separate dispatches
}

// ...use job batching
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;

$jobs = $users->map(fn($user) => new SendMonthlyReport($user));

$batch = Bus::batch($jobs)
    ->then(function (Batch $batch) {
        Log::info("Monthly reports completed: {$batch->totalJobs} sent");
    })
    ->catch(function (Batch $batch, Throwable $e) {
        Log::error("Batch failed: {$e->getMessage()}");
    })
    ->finally(function (Batch $batch) {
        Cache::forget('monthly-reports-processing');
    })
    ->name('Monthly Reports - ' . now()->format('F Y'))
    ->allowFailures()
    ->dispatch();
```
Rate Limiting Jobs
Prevent overwhelming external APIs:
```php
// app/Jobs/SyncToExternalCrm.php
use Illuminate\Queue\Middleware\RateLimited;
use Illuminate\Queue\Middleware\WithoutOverlapping;

class SyncToExternalCrm implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function middleware(): array
    {
        return [
            new RateLimited('external-crm'),
            new WithoutOverlapping($this->client->id),
        ];
    }
}

// app/Providers/AppServiceProvider.php
RateLimiter::for('external-crm', function (object $job) {
    return Limit::perMinute(60); // Max 60 requests per minute
});
```
Horizon Configuration for Production
```php
// config/horizon.php
'environments' => [
    'production' => [
        'supervisor-default' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'balance' => 'auto',
            'minProcesses' => 2,
            'maxProcesses' => 10,
            'balanceMaxShift' => 1,
            'balanceCooldown' => 3,
            'tries' => 3,
            'timeout' => 300,
        ],
        'supervisor-high' => [
            'connection' => 'redis',
            'queue' => ['high'],
            'balance' => 'simple',
            'processes' => 5,
            'tries' => 1,
            'timeout' => 60,
        ],
        'supervisor-reports' => [
            'connection' => 'redis',
            'queue' => ['reports'],
            'balance' => 'simple',
            'processes' => 2,
            'tries' => 3,
            'timeout' => 900, // 15 minutes for heavy reports
        ],
    ],
],
```
Benchmark result: With Horizon auto-scaling, we handle 5x traffic spikes without manual intervention. Job throughput increased from 500/min to 2,400/min.
PHP and Server Configuration
OPcache Configuration
OPcache is the single biggest PHP performance improvement:
```ini
; /etc/php/8.3/fpm/conf.d/10-opcache.ini
opcache.enable=1
opcache.memory_consumption=256
opcache.interned_strings_buffer=32
opcache.max_accelerated_files=20000
; 0 in production: never stat files for changes (reset the cache on deploy)
opcache.validate_timestamps=0
opcache.revalidate_freq=0
opcache.save_comments=1          ; Required for annotations
opcache.enable_file_override=1
opcache.jit=1255                 ; Enable JIT compilation
opcache.jit_buffer_size=128M
```
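To verify that `memory_consumption` and `max_accelerated_files` are sized right, check the cache's hit rate and memory usage at runtime. A hypothetical helper (not from the article); run it in a web context, since the cache is usually disabled in plain CLI:

```php
<?php
// Summarize OPcache health; returns ['enabled' => false] when the
// extension is missing or the cache is off (e.g. default CLI).
function opcacheSummary(): array
{
    if (!function_exists('opcache_get_status')) {
        return ['enabled' => false];
    }
    $status = @opcache_get_status(false); // false = skip per-script details
    if ($status === false) {
        return ['enabled' => false];
    }
    return [
        'enabled' => true,
        'hit_rate' => round($status['opcache_statistics']['opcache_hit_rate'], 2),
        'memory_used_mb' => round($status['memory_usage']['used_memory'] / 1048576, 1),
        'cached_scripts' => $status['opcache_statistics']['num_cached_scripts'],
    ];
}
```

A hit rate consistently below ~99% or `cached_scripts` pinned at `max_accelerated_files` suggests the limits above need raising.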
Because timestamp validation is off, you must clear OPcache after every deploy. One gotcha: calling `opcache_reset()` from a CLI deployment script only resets the CLI's own cache, not PHP-FPM's. Trigger it through a web request (or a tool like cachetool), or simply reload PHP-FPM:

```php
// Must run inside the FPM process, e.g. via a protected deploy endpoint
if (function_exists('opcache_reset')) {
    opcache_reset();
}
```
PHP-FPM Tuning
Calculate based on your server's available memory:
```ini
; /etc/php/8.3/fpm/pool.d/www.conf

; Dynamic process management
pm = dynamic

; Available memory / average PHP process size:
; an 8GB server at ~50MB per process gives ~160 as the theoretical max;
; keep headroom for the OS, database, and Redis
pm.max_children = 100
pm.start_servers = 20
pm.min_spare_servers = 10
pm.max_spare_servers = 30

; Restart workers after 1000 requests to prevent memory leaks
pm.max_requests = 1000

; Slow log for debugging
slowlog = /var/log/php-fpm/slow.log
request_slowlog_timeout = 5s
```
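The arithmetic in the comments can be scripted so it is recalculated whenever the server changes. A standalone sketch of the sizing rule (the numbers are illustrative, not measurements from the article's server):

```php
<?php
// Rule of thumb for pm.max_children:
// (total RAM - RAM reserved for OS/DB/Redis) / average PHP worker size
function maxChildren(int $totalMb, int $reservedMb, int $perProcessMb): int
{
    return intdiv(max($totalMb - $reservedMb, 0), $perProcessMb);
}

// 8GB server, 3GB reserved for MySQL/Redis/OS, ~50MB per worker
echo maxChildren(8192, 3072, 50); // 102
```

Measure the real per-worker size (e.g. with `ps` against the pool's processes) rather than guessing; a Laravel worker with heavy packages can easily exceed 50MB.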
Nginx Optimizations
```nginx
# /etc/nginx/sites-available/your-app.conf
upstream php-fpm {
    server unix:/var/run/php/php8.3-fpm.sock;
    keepalive 16;
}

server {
    listen 443 ssl http2;
    server_name your-app.com;
    root /var/www/your-app/public;

    # Gzip compression
    gzip on;
    gzip_comp_level 5;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml;
    gzip_min_length 1000;

    # Static file caching
    location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff2?)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # PHP handling
    location ~ \.php$ {
        fastcgi_pass php-fpm;
        fastcgi_keep_conn on;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;

        # Timeouts and buffers
        fastcgi_read_timeout 300;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
    }

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
}
```
Benchmark result: After PHP-FPM and Nginx tuning, requests per second increased from 450 to 1,200 on the same hardware.
Real Benchmarks: Before and After
Here are actual measurements from our CRM application:
Project Listing Page
| Metric | Before | After | Improvement |
|---|---|---|---|
| Database queries | 247 | 4 | 98% reduction |
| Query time | 1,840ms | 45ms | 97% faster |
| Memory usage | 89MB | 12MB | 86% reduction |
| Response time | 2,340ms | 180ms | 92% faster |
Changes made: Eager loading, query optimization, response caching
Monthly Report Generation
| Metric | Before | After | Improvement |
|---|---|---|---|
| Execution time | 47s | 3.2s | 93% faster |
| Memory peak | 512MB | 64MB | 87% reduction |
| Database queries | 12,847 | 23 | 99.8% reduction |
Changes made: Raw SQL for aggregations, chunked processing, job batching
API Response Times (p95)
| Endpoint | Before | After | Improvement |
|---|---|---|---|
| GET /api/projects | 340ms | 45ms | 87% faster |
| GET /api/dashboard | 890ms | 120ms | 87% faster |
| POST /api/invoices | 450ms | 180ms | 60% faster |
Changes made: OPcache JIT, Redis caching, query optimization
Monitoring in Production
Performance optimization is ongoing. Here's how we monitor production:
Laravel Pulse
Laravel Pulse provides real-time performance monitoring:
```php
// config/pulse.php
'recorders' => [
    Recorders\SlowQueries::class => [
        'enabled' => true,
        'threshold' => 100, // Log queries > 100ms
    ],
    Recorders\SlowJobs::class => [
        'enabled' => true,
        'threshold' => 5000, // Log jobs > 5s
    ],
    Recorders\SlowRequests::class => [
        'enabled' => true,
        'threshold' => 1000, // Log requests > 1s
    ],
],
```
Custom Metrics with Prometheus
```php
// app/Http/Middleware/TrackMetrics.php
class TrackMetrics
{
    public function handle(Request $request, Closure $next)
    {
        $start = microtime(true);

        $response = $next($request);

        $duration = microtime(true) - $start;

        app('prometheus')
            ->getOrRegisterHistogram(
                'http_request_duration_seconds',
                'HTTP request duration',
                ['method', 'route', 'status']
            )
            ->observe($duration, [
                $request->method(),
                $request->route()?->getName() ?? 'unknown',
                $response->status(),
            ]);

        return $response;
    }
}
```
Alerting Rules
Set up alerts for performance regressions:
```yaml
# Prometheus alerting rules
groups:
  - name: laravel-performance
    rules:
      - alert: HighResponseTime
        # histogram_quantile needs the _bucket series, rated and summed by le
        expr: histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le)) > 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "p95 response time > 1s"
      - alert: HighDatabaseQueryTime
        expr: avg(db_query_duration_seconds) > 0.5
        for: 5m
        labels:
          severity: critical
```
Conclusion
Performance optimization is a journey, not a destination. The key takeaways from 6 years of scaling Laravel:
- Always measure first. Use Telescope, Clockwork, or custom logging to identify actual bottlenecks.
- N+1 queries are everywhere. Install laravel-query-detector in development and fix them immediately.
- Database optimization has the highest ROI. Strategic indexes, optimized queries, and knowing when to use raw SQL make a massive difference.
- Cache strategically. Use tags for organized invalidation, cache at multiple layers, and always have a cache invalidation strategy.
- Queues are essential. Move anything that doesn't need to happen synchronously to a background job. Use Horizon for visibility.
- Server configuration matters. OPcache with JIT, properly tuned PHP-FPM, and optimized Nginx can double your throughput.
- Monitor continuously. Performance can regress silently. Use Pulse, Prometheus, or your preferred monitoring stack.
Start with the biggest bottleneck, measure the improvement, then move to the next one. Small, incremental improvements compound into massive gains over time.
Have questions about Laravel performance? Find me on GitHub or get in touch.