The MU forums have moved to WordPress.org

Optimising a single very large WPMU SQL database (32 posts)

  1. Farms2
    Member
    Posted 18 years ago #

    Phew, well it's been some time and I've been pretty busy but necessity and interest compels me to delve back in here to let you know how I'm currently managing the edublogs.org database and ask for suggestions as to how I can do it better.

At the moment it's a single 6.5 GB database with some hundreds of thousands of tables in it at last count, and it's running together with the rest of WPMU on a new dedicated dual-core server with 4 GB of RAM.

    Here are the stats as they stood about a bit back: http://incsub.org/blog/2006/edublogsorg-the-first-year-stats

Since we shifted to this server, though, we've had some performance & stability issues. For example, twice over the last week MySQL has fallen over and all the blogs have been returned as white screens - this could only be fixed by restarting MySQL.

While looking into how I can fix these we've done a few things, for example:

    - query_cache_size is at 32M
    - wait_timeout has been decreased to 2 (to sort out sleeping connections)
    - I've got the tech people to install eaccelerator http://eaccelerator.net/ as recommended in this excellent thread: http://mu.wordpress.org/forums/topic.php?id=1343&replies=15

    I guess my question is... what else can I do to optimise performance?

    Thanks in advance for any assistance, you seriously don't know how appreciated it is!

    Cheers, James

    [Also coming up: Eeeeek... I'm almost at 32,000 blogs and am a technical dunce... how do I get round this: http://mu.wordpress.org/forums/topic.php?id=1343&replies=15#post-9113 ]

  2. Farms2
    Member
    Posted 18 years ago #

    Anybody? Bueller?

  3. andrewbillits
    Member
    Posted 18 years ago #

The only way to get around the Linux (typically ext3) file system limit is to start splitting up the directory. wp.com uses a hash-based system that basically allows them to distribute the directories over several servers. I think they'd have to be using rsync to pull off the multiple-datacenter bit, but I could be wrong.

    I can think of about ten different ways to go about this but the method wp.com used would be my first choice.

  4. Farms2
    Member
    Posted 18 years ago #

Thanks AB... one of the other nine wouldn't happen to be achievable by one man, a copy of EditPlus2 and an obedient server maintenance crew... would it?

  5. andrewbillits
    Member
    Posted 18 years ago #

Welp, the *easiest* solution would be to break the directory up into, say, five directories labelled a, b, c, d, e. Then all you would have to do is hack the code a bit to look at the blog_id: if it's between, oh say, 1 and 25,000, use directory 'a'. Blogs with an id between 25,001 and 50,000 would use directory 'b'.

This would definitely be a "buying some time" type of solution, but it would get you by until a proper method has been coded and tested.

As for a proper solution, I'd definitely go the wp.com route and use hashes of the blog_id. I *believe* they have 4096 directories, but I may be wrong. Even so, you could get by with far fewer, so it wouldn't be that difficult and it could all stay on one server.
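The two schemes above can be sketched like this. The 25,000-blog buckets and the 4096 directories come straight from the post; the choice of md5 as the hash is my assumption, since the post doesn't say what wp.com actually hashes with:

```python
import hashlib

def range_bucket(blog_id, bucket_size=25000):
    """Stopgap scheme: blogs 1-25000 -> 'a', 25001-50000 -> 'b', ..."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    return letters[(blog_id - 1) // bucket_size]

def hash_bucket(blog_id, num_dirs=4096):
    """wp.com-style scheme: hash the blog_id into one of num_dirs
    directories, so no single directory hits the ext3 subdirectory limit."""
    digest = hashlib.md5(str(blog_id).encode()).hexdigest()
    return format(int(digest, 16) % num_dirs, "03x")

print(range_bucket(1), range_bucket(25000), range_bucket(25001))  # a a b
```

The hash scheme spreads blogs evenly from day one, whereas the range scheme fills one directory at a time, which is why it only buys time.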

  6. quenting
    Member
    Posted 18 years ago #

I'm also stuck because of the 32,000-blogs limit right now. I've stopped advertising and I'm sitting at around 27,000 blogs until I find a solution.
After a lot of investigation, I'm leaning towards a solution based on Apache mod_proxy working in reverse mode.
The problem with only hashing the blog_id into directories is that it doesn't solve the problem of running out of server resources (disk space, CPU, RAM).

Disk space can be solved with complex NFS setups (which are also network-hungry, and almost necessarily mean a VPN setup etc.).
CPU and RAM on the DB side can be solved by trying to work out the multiple-DB system that's half written in wpmu.
Solving performance problems on the PHP side would probably mean using a load balancer, but I don't want to get into the mess that would be: several HTTP servers set up to serve any request, each having on its file system the files of some of the blogs (but not necessarily the files for the blog it's currently processing a request for), and each connecting to a different DB on a different server - except for the main tables, of course, which live on only one server.
That sounds like a real mess and a web of connections.

The mod_proxy solution has many advantages:
the idea is that, based on the blog name, each blog is hosted on one server. Everything - the files, the blog-specific database tables, etc. - lives on the same server.
Too many blogs -> multiple servers.
Then there's a main server that receives all requests; with a couple of mod_rewrite rules it acts as a reverse proxy and connects to the appropriate server based on the blog name, then returns the results to the end user, for whom everything seems to be coming from the same place.
A bit of DNS juggling is necessary to make all this work, but it also solves the 32,000-folders issue.
If you want to host more than 32,000 blogs on one server, that's possible with a little more DNS juggling. Another advantage is that, locally, wpmu works as a single instance (except for the main set of tables), which means you don't have to modify the code for storing images or for finding out which specific database to use.

    Anyway this is the simplest way I've found to make wpmu work on multiple servers, without having to mess with NFS etc. I'm currently experimenting with it, but I need to complete the separation of connections to specific DB and global DB.
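A minimal sketch of the mod_rewrite + mod_proxy front end described above. The server names and the map file are hypothetical, not from the post:

```apache
# Front-end vhost: look up which back-end server hosts this blog
# and proxy the request there transparently.
ProxyRequests Off
RewriteEngine On
# blogmap.txt maps hostnames to back ends, one per line, e.g.:
#   someblog.example.com  backend2.internal:80
RewriteMap blogmap txt:/etc/httpd/blogmap.txt
RewriteCond ${blogmap:%{HTTP_HOST}} ^(.+)$
RewriteRule ^/(.*)$ http://%1/$1 [P,L]
```

Each back end would also want a matching ProxyPassReverse so redirects issued by the back end get rewritten to the public hostname.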

    Regarding my current mysql config (I have a similar server to yours), here it is:

    [root@ns6098 root]# more /etc/my.cnf
    [mysqld]
    safe-show-database
    max_connections = 500
    key_buffer = 1024M
    myisam_sort_buffer_size = 32M
    join_buffer_size = 1M
    read_buffer_size = 1M
    sort_buffer_size = 2M
    table_cache = 500
    thread_cache_size = 256
    wait_timeout = 900
    tmp_table_size = 64M
    connect_timeout = 10
    max_allowed_packet = 32M
    max_connect_errors = 10
    read_rnd_buffer_size = 524288
    bulk_insert_buffer_size = 64M
    thread_concurrency = 4
    query_cache_limit = 4M
    query_cache_size = 512M
    query_cache_type = 1
    query_prealloc_size = 16384
    query_alloc_block_size = 16384
    skip-innodb
    skip-networking
    long_query_time=10
    log-slow-queries = /var/log/mysql/slow-query.log
    ft_min_word_len=3

    [mysqld_safe]
    open_files_limit = 8192

    [mysqldump]
    quick
    max_allowed_packet = 16M

    [myisamchk]
    ft_min_word_len=3
    key_buffer = 64M
    sort_buffer = 64M
    read_buffer = 16M
    write_buffer = 16M

With these settings I sit at around 3 GB of RAM used. CPU is only a problem when WPMU somehow crashes. Randomly, some blogs seem to stop responding. I don't know what the users did on them to mess things up, but I know that requests to those blogs pin the Apache process at 100% CPU until it's killed. It probably comes from the specific version of WPMU I'm using; I need to upgrade to a more recent one, but I've made many modifications to it and that is going to be some work.
Also, sometimes CPU hits 100% when a user has the bright idea of defining an RSS feed widget pointed at his own blog's feed. This seems to create an infinite loop that WordPress doesn't check for.

Also, I've had *many problems* with eAccelerator and I wouldn't recommend it for WPMU.
It kept crashing while caching kses.php on a random basis, bringing the whole of Apache down. I ended up installing APC instead. I had the same problem, but unlike EA, APC has the ability to not cache a specific file, so using that functionality I was able to exclude kses.php from the cached files, and since then I've had no more problems with it.

Anyway, it'd be great if other hosts reaching the dreaded 32,000-blogs limit shared how they get past it.

  7. andrea_r
    Moderator
    Posted 18 years ago #

    "[mysqldump]
    quick
    max_allowed_packet = 16M"

    Okay, this is where I'm stuck. How can I take a backup when my limit is also set like this, but the db is bigger than that?

"CPU and RAM on the DB side can be solved by trying to work out the multiple-DB system that's half written in wpmu."

We've figured out part of this; namely, it's been written for replication across multiple database *servers*. We'd get further, but this husband of mine has to sleep sometime. :-/

  8. Farms2
    Member
    Posted 18 years ago #

    Thanks for the excellent feedback y'all, if any other hosts are out there it'd be fantastic to hear their solutions... I'll look into changing to APC.

  9. Farms2
    Member
    Posted 18 years ago #

Here's my config... there are some pretty huge differences there. Do you reckon I'd be better off changing them to quenting's?

    ===================
    # cat /etc/my.cnf
    [mysqld]
    set-variable = max_connections=5000
    set-variable = key_buffer=16M
    set-variable = myisam_sort_buffer_size=64M
    set-variable = join_buffer=1M
    set-variable = record_buffer=1M
    set-variable = sort_buffer=2M
    set-variable = table_cache=1024
    set-variable = thread_cache_size=128
    set-variable = wait_timeout=2
    set-variable = connect_timeout=10
    set-variable = max_allowed_packet=16M
    set-variable = max_connect_errors=100
    set-variable = max_user_connections=100
    set-variable = query_cache_size=32M
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock

    [mysqld_safe]
    err-log=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid
    open_files_limit=819200

    [mysqldump]
    quick
    set-variable = max_allowed_packet=16M

    [myisamchk]
    set-variable = key_buffer=64M
    set-variable = sort_buffer=64M
    set-variable = read_buffer=16M
    set-variable = write_buffer=16M

  10. Farms2
    Member
    Posted 18 years ago #

    Interesting - my server admin installed APC and "Segmentation fault" errors stopped everything from appearing...

  11. quenting
    Member
    Posted 18 years ago #

Did you remove kses.php from being cached? Here's my line in the config file (it also excludes a problem file from vBulletin):

    apc.filters = "(class_bbcode_alt.php)|(kses.php)"

  12. quenting
    Member
    Posted 18 years ago #

    ===================
    # cat /etc/my.cnf
    [mysqld]
    set-variable = max_connections=5000

>> this should never be bigger than the maximum number of Apache clients on your system. 5000 sounds really high.

    set-variable = key_buffer=16M

>> you definitely want to raise this. It's the memory allocated to MySQL for storing indexes; raising it improves speed greatly.

set-variable = thread_cache_size=128

>> if you have a dual Xeon, supposedly 256 is better.

set-variable = query_cache_size=32M

>> depending on the memory you have available, you may want to raise this too (the query cache). A query cache that's too small gets updated very often and the cached queries aren't used much, so if you leave it too low you'd probably rather just disable caching. On a wpmu type of site, caching isn't great anyway.
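One way to tell whether the cache is too small is to watch the server's own counters. This is a generic MySQL diagnostic, not something from the post:

```sql
-- Gauge whether the query cache is earning its keep:
SHOW STATUS LIKE 'Qcache%';
-- Low Qcache_hits relative to Com_select, or a steadily climbing
-- Qcache_lowmem_prunes, means the cache is churning: either grow
-- query_cache_size or disable the cache with query_cache_type = 0.
```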

  13. Farms2
    Member
    Posted 18 years ago #

Quenting, yerra legend :) Will try these and report back. Thank you!

  14. andrewbillits
    Member
    Posted 18 years ago #

I could have sworn there was a post detailing how to optimize MySQL for WPMU, but I can't seem to find it anywhere. Maybe it was one of the posts we lost when the forums were upgraded.

  15. Farms2
    Member
    Posted 18 years ago #

    Apparently APC isn't compatible with Zend Optimizer...

    I haven't the faintest what kind of benefits Zend is bringing to the party, anyone?

    On the SQL front my server admin made these changes and things seem to be going pretty well:

"We have set key_buffer to 50M and thread_cache_size to 256.
Also we have set query_cache_size to 64M, as we should not disable caching at all, but since cached queries aren't used much we should not raise this too far either."

  16. SubWolf
    Member
    Posted 18 years ago #

    OK I had to join in on this one, especially as I'm investigating optimization of a decent-size WPMU setup (Lockergnome). ;-)

Personal bitch: sleeping connections are currently a real annoyance. The wait timeout is set to 600s because we've seen threads from other sites go a minute or two between queries. I know that if someone 'stops' the page in their browser before rendering is complete it can cause PHP to leave the connection open, but this is way in excess of that. I'll be doing some investigating and hopefully reporting back.

    eAccelerator - I encountered opcode crashes too, but it can filter files to not be cached. I'm using...

    ---snip---
    eaccelerator.filter = "!*kses.php"
    ---snip---

    ... on V0.95.

quenting, Farms2 - mostly good configs; a decent-sized key buffer & query cache are essential for speeding things up, though 5000 connections may be a little excessive IMHO. Tweak those settings up as high as you can, bearing in mind that all-important equation:

    Max memory usage = key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections

    Gotta leave enough for that after the caches. ;-)
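Plugging quenting's posted settings into that formula (all numbers taken from the my.cnf earlier in the thread) shows why the box sits near 3 GB:

```python
# Values from quenting's my.cnf above, in MB.
key_buffer_size = 1024
read_buffer_size = 1   # per connection
sort_buffer_size = 2   # per connection
max_connections = 500

# SubWolf's worst-case equation:
max_memory = key_buffer_size + (read_buffer_size + sort_buffer_size) * max_connections
print(max_memory)  # 2524 MB, before the 512M query cache is even counted
```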

andrea_r - in case you were serious, max_allowed_packet is just the maximum size of a single query or log event that the server will receive, not a cap on the database size. 16MB is fine. :)

  17. quenting
    Member
    Posted 18 years ago #

Mmm, the version of eAccelerator I'd tried didn't allow per-file filtering. Anyway, APC does a fair job too. I wonder what the problem with this kses.php file is, though; the opcode crashes seem systematic across many different systems and with various versions of optimizers.

  18. Farms2
    Member
    Posted 18 years ago #

    I'm pretty happy with my config - the site's been running sweetly for a couple of months now (touch wood several times)

  19. zeug
    Member
    Posted 18 years ago #

    There was a discussion on wp-hackers a few days ago on optimisation, including for wp.com which might interest you.

  20. lunabyte
    Member
    Posted 18 years ago #

    Thanks zeug. Nice reference.

  21. quenting
    Member
    Posted 18 years ago #

there's one thing in that discussion that surprises me:

>> 3. Switch your tables to InnoDB

I've always believed InnoDB to be slower than MyISAM. Is anyone else surprised by this recommendation, or does anyone have an explanation for why it would be faster?

  22. lunabyte
    Member
    Posted 18 years ago #

Don't quote me - it's been a while since I've dug into it - but it's something to do with the way it handles bigger databases that are queried frequently (duh, Luke, I know...) as compared to the way MyISAM handles requests. Also, InnoDB can lock individual rows, versus MyISAM, which locks an entire table. Under a lower load it isn't a big deal, but having the rest of the table still available during heavy loads helps a lot.

    [add]

    I found this reference as a comparison.
    It might have some useful info in it.

    http://www.phpdiary.org/articles/myisam-vs-innodb/

  23. quenting
    Member
    Posted 18 years ago #

Mmm, well, if locks are the issue, probably only the main tables should be made InnoDB, right? It doesn't make much sense to give the individual blogs' tables row-level locking.

  24. lunabyte
    Member
    Posted 18 years ago #

Actually, it would if the particular blog is pretty active, since InnoDB only locks individual rows where MyISAM locks the entire table.

But in most cases the individual blog tables would be a far lower priority for InnoDB than the main global tables.

As a note, InnoDB doesn't work with fulltext searches, so some plugins (Related Posts, Search Reloaded, etc.) that add a FULLTEXT index to the posts table won't function.
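Converting just the global tables, per the discussion above, would look something like this. The table names assume the stock wpmu schema:

```sql
-- Global tables see the most concurrent writes, so they benefit
-- most from InnoDB's row-level locking:
ALTER TABLE wp_blogs ENGINE = InnoDB;
ALTER TABLE wp_users ENGINE = InnoDB;
ALTER TABLE wp_usermeta ENGINE = InnoDB;
-- Leave any table carrying a FULLTEXT index (e.g. a plugin-altered
-- per-blog posts table) on MyISAM; InnoDB of this era has no
-- FULLTEXT support.
```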

  25. Farms2
    Member
    Posted 17 years ago #

Am going to kick this off again in relation to drmike's v. exciting MySQL optimizer plugin: http://wpmudevorg.wordpress.com/project/Optimize-MySQL

I've tried to run this on a relatively small site (700 blogs) but it was timing out, so I'm not sure how it'll cope with 36K of the buggers over at edublogs.org.

Would cronning something like this also completely stuff up my server every time it ran over several thousand blogs?

Oh, and one more thing - you have to be from the US or Canada to contribute to a Starbucks card, drmike... got any alternatives?

  26. Ovidiu
    Member
    Posted 17 years ago #

As the discussion at the beginning of the post started with optimizing the MySQL server, I'll toss in a link to a good tool / tutorial: http://hackmysql.com/mysqlreport

  27. quenting
    Member
    Posted 17 years ago #

Optimizing tables (by this I mean running the OPTIMIZE TABLE command) across a wpmu install shouldn't help that much. OPTIMIZE TABLE reorganizes a table's content within its MySQL file to make rows faster to access (the same kind of philosophy as defragmentation). It's very useful for large tables that often have rows deleted/inserted. Most of the tables in a WPMU install are quite small (since each serves only one blog) and don't hold many records, so going through the hassle of optimizing every single table is IMHO not worth it. The only tables worth optimizing a bit are the global tables (and even those aren't that big and don't see many deletions, but still).
The only table I try to optimize often is a table in which I keep all blog statistics, which is quite large. Just my 2c.
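Put in command form, the advice above is to target only the global tables and any large, churn-heavy table rather than every per-blog table. The table names assume the stock wpmu schema:

```sql
-- Check fragmentation first: a large Data_free value in the output
-- marks a table actually worth optimizing.
SHOW TABLE STATUS LIKE 'wp_blogs';
-- Then optimize only the global, churn-heavy tables:
OPTIMIZE TABLE wp_blogs, wp_users, wp_usermeta;
```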

  28. drmike
    Member
    Posted 17 years ago #

Yes, you can cron it. I just haven't done so because hosts out there have so many different ways of doing file calls. Some allow a wget, some don't. Some allow lynx and some don't. That's a question best asked of your host.

That, and I want to be emailed the results, and I need to remember how to pipe the output to email.

    I just hit the bookmark once a day for now. Takes all of 45 seconds to run. Doing it while I run this.

With me, I had a lot of problems just being able to pull a backup, which was why I was looking for an optimize script. I went from over 400 megs a week ago to about 195 megs this morning between optimizing, going through manually and deleting tables for non-existent blogs, deleting the spam blogs, and upgrading the stats plugin. (The older version of it was putting in 'empty' when it didn't know something; the author upgraded it to just put in a blank space instead. That was a good 45 megs right there.)

I have SSH access to my sites, but I had to ask to get it. On my own hosting, fewer than 25 users have it, and 17 of them are my fallbacks within our support forums, and even they only have limited root. (The folks I contract out to for support do have it, though they have a policy where they have to log it and contact me when they do so.)

I actually removed the Starbucks card number from the site; I had someone get burned by them on the wp.com forums. I do have gift certificates from RPGNow on my wishlist (which I need to update, since I've received the books I listed) if that works for you. If you go over to that site and search for the email address I gave them - tdjcbe ( at ) gmail ( dot ) com - you can pull up my wishlist. I had someone just buy a couple of books and have the staff transfer them over.

    Oh, and thanks for thinking of me. :)

    hope this helps,
    -drmike

  29. andrea_r
    Moderator
    Posted 17 years ago #

If you have some prolific, popular bloggers on your site, they get hit with spam, and then their tables bloat. They're the ones we look out for. (Just to add our 5 cents (CAN$).)

  30. drmike
    Member
    Posted 17 years ago #

The stats program bloats tables as well - I have blogs with stat tables over 5 megs. Gotta work out the code to delete stats over 3 months old soon.
