Whatever kind of website you are starting, whether an eCommerce site, a news site or a blog, you may have considered building it on WordPress. If you have, you will find that as the site grows in popularity, WordPress's out-of-the-box capabilities may no longer handle all the incoming traffic effectively. You will need to scale the site up in order to handle greater traffic volumes efficiently.
A WordPress blog can be scaled up in a number of ways, but this article will show you how to improve database performance by making a few small changes. As background, WordPress employs the MySQL database system, with which you should be familiar by now. If not, you can look up online resources that describe its main features and how WordPress uses them to run your site.
A word of caution: we advise that you refrain from running these database queries directly against a live production server. Before you begin, make sure you have proper backups that you can restore if any of the tweaks below goes wrong. If you need assistance at any point, you can contact remote DBA services at remotedba.com.
The WordPress CMS stores a large volume of data in the database for security and backup purposes. This information is not always visible to you or any other user of the system, e.g. revisions of every published or draft post, unpublished comments, dead links, and unused tags, categories and media.
For a new site, this data may not noticeably hurt performance, but as your website grows, the unnecessary information accumulates and can clog up your database, slowing its response to queries.
One way to prevent this as you go along is to turn off revision tracking for posts, which is the biggest contributor to the pool of junk data. If you do need to keep revisions, however, you can forgo that suggestion and instead carry out a manual database cleanup every once in a while. There are resources on the Internet that give step-by-step instructions for this.
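To stop revision tracking at the source, you can add `define( 'WP_POST_REVISIONS', false );` to wp-config.php (or pass a number to keep only that many revisions per post). For the manual-cleanup route, the sketch below shows two common cleanup queries you could run in phpMyAdmin's SQL tab; it assumes the default `wp_` table prefix, and you should take a backup first, as advised above.

```sql
-- Assumes the default "wp_" table prefix; adjust if yours differs.
-- Always back up the database before running deletes like these.

-- Remove all stored post revisions.
DELETE FROM wp_posts WHERE post_type = 'revision';

-- Remove comments still sitting in the spam queue.
DELETE FROM wp_comments WHERE comment_approved = 'spam';
```

Note that deleting revisions this way can leave orphaned rows in wp_postmeta; dedicated cleanup plugins handle those related tables for you, which is why they are the safer choice if you are not comfortable with raw SQL.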
Continually adding, deleting and editing entries (pages, posts, comments etc.) can affect the order in which data is stored in the database. New and old data get mixed up, occupying scattered sectors on the disk called fragments.
Each fragment is a logical unit of data that means little on its own; together, the fragments make up the complete database. (This incidental fragmentation should not be confused with deliberate fragmentation, i.e. partitioning a database vertically, horizontally or in a hybrid scheme, which is a separate technique used to cut down indexing times.)
Even though MySQL and the operating system monitor fragmentation during reading or writing of data, too much of it can increase the execution time for a command, slowing down the site. MySQL databases are easily defragmented using phpMyAdmin, which has a GUI for performing a number of database administrative tasks, including defragmentation.
The ALTER TABLE command is useful for defragmenting single tables. This command rebuilds the entire table from scratch, so that all data within the table is stored sequentially and in the same location. To defragment the entire database, perform an SQL dump, drop all the tables, and then restore the dump to rebuild every table from scratch.
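As an illustration, the single-table approach can be sketched as follows (run in phpMyAdmin's SQL tab; table names assume the default `wp_` prefix, and `your_database` is a placeholder for your schema name):

```sql
-- See how much reclaimable (fragmented) space each table is carrying.
SELECT table_name, data_free
FROM information_schema.tables
WHERE table_schema = 'your_database';

-- Rebuild a single table in place to defragment it
-- (a "null" ALTER that copies the table row by row).
ALTER TABLE wp_posts ENGINE = InnoDB;

-- Or use the shorthand, which performs the same rebuild on InnoDB tables.
OPTIMIZE TABLE wp_posts, wp_comments;
```

Both forms lock out writes while the rebuild runs, so schedule them for a low-traffic window.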
When using WordPress, a request is served initially by a web server (usually Apache) and then passed over to PHP, which processes it, reads or manipulates data in the database as instructed, and finally renders the result.
If many incoming requests ask the server for the same content, it is inefficient to repeat the same processing for each one. This is where caching comes in handy.
You can cache data in a variety of ways, including database caching, which is our focus today. Database caching works in two ways: you can cache the results of SQL queries, or whole tables. If you are not experienced, we recommend using a WordPress plugin for database caching, for example W3 Total Cache, which handles a number of caching procedures.
Alternatively, you may choose to cache your own custom queries. For instance, if you often run a query to display the most active viewers/users, you should not have to re-run it every time WordPress serves a page, especially since its result does not change very frequently.
Instead, you can set the query to refresh only at specific intervals. The Transients API is a tool for exactly this, and step-by-step tutorials for it are available online.
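A minimal sketch of the idea using the Transients API is shown below. The function name, transient key and the "most active commenters" query are made up for illustration; `get_transient()`, `set_transient()` and the `$wpdb` object are standard WordPress APIs.

```php
<?php
// Hypothetical helper: returns the ten most active commenters,
// re-running the underlying query at most once per hour.
function myblog_top_users() {
    $top = get_transient( 'myblog_top_users' ); // cached copy, if any

    if ( false === $top ) {
        global $wpdb;
        // $wpdb->comments resolves to the comments table
        // (wp_comments with the default prefix).
        $top = $wpdb->get_results(
            "SELECT comment_author, COUNT(*) AS total
             FROM {$wpdb->comments}
             WHERE comment_approved = '1'
             GROUP BY comment_author
             ORDER BY total DESC
             LIMIT 10"
        );
        // Cache the result for one hour.
        set_transient( 'myblog_top_users', $top, HOUR_IN_SECONDS );
    }

    return $top;
}
```

When the transient expires, the first request afterwards re-runs the query and re-primes the cache; every other request in between reads the stored copy instead of hitting MySQL.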
If none of the above options gives you a tangible improvement in how your website handles traffic, do not despair; there is one more method that helps in most cases. Database scaling is a cornerstone of high-availability architecture. There are many techniques for scaling databases, and one of them is master-slave replication.
As the name suggests, this technique involves creating a single master and one or more slaves, each of which holds a copy of the master's data. Web applications read from and write to the master, but slaves can only be read from. If the master fails, read operations are unaffected, but a new master must be promoted before write operations can resume.
However, master-slave replication is not very efficient for systems or applications that perform a lot of writes, because of the increased chance of inconsistency. For instance, if one slave lags behind the master and has not yet applied the latest updates, you can get two different read values from different slaves.
A plugin that can help you with master-slave replication in WordPress is HyperDB, typically combined with HAProxy as a load balancer. Again, look up tutorials for step-by-step application of the plugin, and contact professionals if you run into any problems. The setup works much the same regardless of your hosting provider.
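To give a sense of what the configuration looks like, here is a sketch of the kind of entries HyperDB's db-config.php can contain for one master and one read-only replica. The hostnames are placeholders, and you should consult the plugin's bundled sample config for the full set of options.

```php
<?php
// Sketch of a HyperDB db-config.php: one master, one replica.
// Hostnames below are placeholders; DB_USER, DB_PASSWORD and
// DB_NAME are the usual wp-config.php constants.

// Master: accepts both writes and reads.
$wpdb->add_database( array(
    'host'     => 'db-master.example.com',
    'user'     => DB_USER,
    'password' => DB_PASSWORD,
    'name'     => DB_NAME,
    'write'    => 1,
    'read'     => 1,
) );

// Replica: 'write' => 0 makes this server read-only.
$wpdb->add_database( array(
    'host'     => 'db-replica.example.com',
    'user'     => DB_USER,
    'password' => DB_PASSWORD,
    'name'     => DB_NAME,
    'write'    => 0,
    'read'     => 1,
) );
```

HyperDB then routes every write to the master while spreading reads across the servers that allow them, which is exactly the read/write split described above.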