
The most common causes of slowdown and loss of performance for web applications developed in PHP/MySQL

written September 10, 2024

The causes of slowdown in PHP and MySQL-based web applications tend to recur from project to project. Here's a detailed analysis of the most common ones, along with specific solutions to address them:



1. Unoptimized queries using FILESORT without indexes



When MySQL uses FILESORT, it is essentially sorting the results of a query in memory (and potentially on disk if memory is insufficient), which can drastically slow down operations, especially for large tables. This often happens when appropriate indexes are missing on columns used in ORDER BY, GROUP BY, or WHERE clauses.


Solutions:



  • Adding appropriate indexes: ensure that the columns used in ORDER BY, GROUP BY, and WHERE have adequate indexes. The use of composite indexes can further improve performance when multiple columns are used in these clauses.

  • Use EXPLAIN for query profiling: utilize EXPLAIN to identify where filesorts or temporary tables are being used (they appear as "Using filesort" and "Using temporary" in the Extra column of the output). If a query shows these warnings, consider redesigning it.

  • Query optimization: review the query structure and check if you can reduce or eliminate sorting at the query level. Sometimes, complex sorting can be handled in memory by the PHP application if the amount of data is manageable.
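As a concrete sketch of the first two solutions, consider a hypothetical orders table (the table and column names here are illustrative, not from a real schema). A composite index whose leading column matches the WHERE clause and whose trailing column matches the ORDER BY lets MySQL read rows already in sorted order, eliminating the filesort:

```sql
-- Hypothetical query: recent orders for one customer.
-- Without a suitable index, EXPLAIN reports "Using filesort":
EXPLAIN SELECT id, total
FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC;

-- Composite index: WHERE column first, ORDER BY column second
ALTER TABLE orders ADD INDEX idx_customer_created (customer_id, created_at);
```

After adding the index, running EXPLAIN again should show the index in the "key" column and no "Using filesort" in "Extra".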



2. Nested queries (queries inside a loop)



Running SQL queries inside a loop is a very common issue and can cause severe slowdowns, especially when the loop is large. This happens because each iteration of the loop triggers a separate round trip to the database, leading to hundreds or thousands of queries for a single request.


Solutions:



  • Restructuring queries: if possible, try to move queries out of loops and restructure the code to execute a single more complex query that returns all the necessary data. For example, if you are fetching related data, use JOIN or IN rather than running a query for each loop item.

  • Pre-fetching data: you can run a larger query before the loop to retrieve all necessary data and then process the results in PHP. This reduces the number of database interactions.

  • Caching results: if a query is executed repeatedly with the same parameters, consider caching the results to reduce the number of database calls.
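A minimal sketch of the restructuring and pre-fetching ideas above, using PDO (the table and column names are hypothetical): instead of one query per ID inside the loop, a single IN() query fetches everything, and the results are indexed in PHP for fast lookup.

```php
<?php
// Instead of one query per user ID inside a loop (N round trips),
// fetch all rows at once with IN() and index them by ID in PHP.
function fetchProfiles(PDO $pdo, array $userIds): array
{
    if ($userIds === []) {
        return [];
    }
    // One bound placeholder per ID, so values stay safely parameterized
    $placeholders = implode(',', array_fill(0, count($userIds), '?'));
    $stmt = $pdo->prepare(
        "SELECT user_id, display_name FROM profiles WHERE user_id IN ($placeholders)"
    );
    $stmt->execute($userIds);

    // Index rows by user_id so the caller can look each one up in O(1)
    $byId = [];
    foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
        $byId[$row['user_id']] = $row;
    }
    return $byId;
}
```

The subsequent loop then works entirely in memory against the returned array, with no further database calls.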



3. Slow or unresponsive external filesystem connections (e.g., FTP, Samba)



External filesystem connections such as FTP or Samba servers can cause significant slowdowns if they respond slowly or not at all. This issue is particularly relevant when the PHP application frequently needs to access external resources such as files or directories on these systems.


Solutions:



  • Caching data: cache data retrieved from external filesystems to reduce the frequency of access to these resources. This can be done in memory or on a local disk.

  • Timeouts and error handling: set appropriate timeouts and handle errors so that the application can recover or continue without blocking the entire process if an external filesystem is unresponsive.

  • Asynchronous operations on external filesystems: if possible, perform asynchronous operations when interacting with external filesystems. This allows the application to continue responding to other requests while waiting for the filesystem operation to complete.

  • Connection monitoring: implement a monitoring system that alerts you in case of high response times or connection errors with external filesystems, allowing you to quickly address the problem.
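The timeout and error-handling advice can be sketched with PHP's FTP extension (host, credentials, and path here are placeholders): the connection attempt gets an explicit short timeout, and every failure path logs and returns early instead of blocking the request.

```php
<?php
// Sketch: guard an FTP read with explicit timeouts so a dead server
// cannot stall the whole request. All names are hypothetical.
function fetchRemoteFile(string $host, string $user, string $pass,
                         string $remotePath, int $timeoutSec = 5): ?string
{
    // Third argument of ftp_connect: connection timeout in seconds
    $conn = ftp_connect($host, 21, $timeoutSec);
    if ($conn === false) {
        error_log("FTP: could not reach $host within {$timeoutSec}s, skipping");
        return null; // degrade gracefully instead of hanging
    }
    if (!@ftp_login($conn, $user, $pass)) {
        error_log("FTP: login failed on $host");
        ftp_close($conn);
        return null;
    }
    // Also cap the per-operation network timeout
    ftp_set_option($conn, FTP_TIMEOUT_SEC, $timeoutSec);

    $tmp = tmpfile();
    $ok = ftp_fget($conn, $tmp, $remotePath, FTP_BINARY);
    ftp_close($conn);
    if (!$ok) {
        return null;
    }
    rewind($tmp);
    return stream_get_contents($tmp);
}
```

Callers must handle the null case, for example by falling back to cached data as suggested above.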


General recommendations:



  • Logging and monitoring: add a logging and monitoring system that allows you to track query performance and quickly identify issues.

  • Load testing: perform load testing to simulate real traffic situations and identify potential bottlenecks in the system.


Detailed Logging and Monitoring


A well-implemented logging and monitoring system is essential for identifying and resolving performance issues in a web application. This system should provide a clear view of how long individual SQL queries and other critical points within the script take, allowing you to quickly identify any bottlenecks.



  1. SQL Query Logging


It's crucial to monitor the execution time of each SQL query within the PHP script. This can be done with a logging system that records:



  • Executed query: the text of the executed SQL query.

  • Execution time: the time taken to execute the query, measured in milliseconds.

  • Timestamp: the exact moment the query was executed.

  • Context: additional information such as the user who executed the query, the page or script that generated it.


Implementation:



  • PHP PDO: if you are using PDO for database connections, you can easily measure query execution time by wrapping them in a code block that uses microtime(true) before and after execution.


// $sql and $params are assumed to have been defined earlier
$start_time = microtime(true);
$stmt = $pdo->prepare($sql);
$stmt->execute($params);
$end_time = microtime(true);
$execution_time = ($end_time - $start_time) * 1000; // Convert to milliseconds


// Log execution time
error_log("Query: $sql - Execution time: {$execution_time} ms");


 



  • Logging to a file or database: the collected data can be stored in a dedicated log file or a database table for later analysis. For example, you can write a log like:


error_log("[QUERY] {$sql} - [TIME] {$execution_time} ms - [TIMESTAMP] " . date('Y-m-d H:i:s'));



  2. Script Critical Points Logging


In addition to monitoring SQL queries, it's important to track the execution times of critical points within the PHP script, such as:



  • Connections to external resources: connections to external filesystems, third-party APIs, or other external services.

  • Execution of particularly heavy functions: if the script performs intensive operations such as image processing, handling large amounts of data, or complex calculations.

  • Start and end of the script: to monitor the overall execution time of the script.


Implementation:



  • Time measurement: use microtime(true) to record the time at the beginning and end of each critical point.


$start_time = microtime(true);


// Execute the critical operation
critical_function();


$end_time = microtime(true);
$execution_time = ($end_time - $start_time) * 1000; // Time in milliseconds


// Log execution time of the critical point
error_log("Critical operation - Execution time: {$execution_time} ms - [TIMESTAMP] " . date('Y-m-d H:i:s'));



  • Detailed logging: save these logs in a dedicated performance log file, separate from standard error logs, so they can be easily analyzed.



  3. Continuous analysis and monitoring


Once detailed logging is implemented, it's important to regularly analyze the collected data to identify:



  • Slow or inefficient queries: look for queries that take a long time to execute and evaluate if they can be optimized.

  • Critical points with high execution times: identify parts of the script that take the most time and consider optimizing the code or distributing the load across multiple servers.

  • Performance patterns over time: analyze performance over time to see if there are fluctuations related to specific events or traffic loads.
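Before reaching for advanced tools, a simple script can already surface the slowest queries. The sketch below parses log lines in the "[QUERY] ... - [TIME] X ms ..." format shown earlier (the threshold and function name are illustrative):

```php
<?php
// Sketch: scan performance-log lines in the "[QUERY] ... - [TIME] X ms ..."
// format used above and report entries slower than a threshold.
function findSlowQueries(array $logLines, float $thresholdMs): array
{
    $slow = [];
    foreach ($logLines as $line) {
        // Capture the query text and its execution time in milliseconds
        if (preg_match('/\[QUERY\]\s*(.+?)\s*-\s*\[TIME\]\s*([\d.]+)\s*ms/', $line, $m)) {
            $timeMs = (float) $m[2];
            if ($timeMs >= $thresholdMs) {
                $slow[] = ['query' => $m[1], 'time_ms' => $timeMs];
            }
        }
    }
    // Slowest first, so the worst offenders top the report
    usort($slow, fn($a, $b) => $b['time_ms'] <=> $a['time_ms']);
    return $slow;
}
```

Running this daily over the performance log (e.g., with a threshold of 100 ms) gives a prioritized list of queries worth optimizing.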


Advanced tools:



  • Log visualization: use tools like Kibana or Grafana to visualize logs graphically and more easily identify performance issues.

  • Real-time monitoring: implement solutions like New Relic, Datadog, or similar tools to monitor performance in real-time and receive automatic alerts in case of performance degradation.



  4. Load and stress testing


To complete the picture, regularly perform load tests to simulate real traffic and stress tests to understand how the application behaves under extreme load. This will help you identify any bottlenecks that might not emerge during normal operation.
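As a starting point, a basic load test can be run with ApacheBench (ab); the URL, request count, and concurrency below are placeholders to adjust for your environment:

```shell
# Sketch: 1000 total requests, 50 in parallel, against a local endpoint.
# Watch the "Requests per second" and percentile latency lines in the output.
ab -n 1000 -c 50 http://localhost/index.php
```

Comparing these numbers before and after an optimization gives a quick, repeatable measure of its real impact.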
