
Background: it's a 92 GB RAM / 20-CPU server running Ubuntu 18.04.

mysql Ver 15.1 Distrib 10.1.41-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2

Issue:

    root@z:# time mysql -uroot -pXXX -e "select now()"
    +---------------------+
    | now()               |
    +---------------------+
    | 2019-09-25 14:10:24 |
    +---------------------+

    real    0m22.267s
    user    0m0.000s
    sys     0m0.006s

It takes over 20 seconds just to connect to MySQL.

IMPORTANT: it's not always like this; it happens when we have more traffic. We use php-fpm with PDO and PERSISTENT=false, since it's a localhost connection over a Unix socket (so it's not a TCP or network issue).

Usually it takes 0.0X seconds.
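To capture exactly when the slowdown happens, one minimal probe (a sketch, not MySQL-specific) is to time how long the kernel takes to complete `connect()` on the server's Unix socket. If `connect()` itself blocks during the spikes, the listen backlog is likely full; if `connect()` returns quickly but the client handshake is slow, the delay is inside mysqld (e.g. thread creation). The socket path below is the one from the my.cnf in the question; the 1-second threshold is an arbitrary illustration:

```python
import socket
import time

def probe_socket(path, timeout=30.0):
    """Return the seconds taken to complete connect() on a Unix domain
    socket, or None if the connection fails (e.g. no listener)."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    start = time.monotonic()
    try:
        s.connect(path)
    except OSError:
        return None
    finally:
        s.close()
    return time.monotonic() - start

if __name__ == "__main__":
    SOCK = "/var/run/mysqld/mysqld.sock"  # path from the my.cnf above
    for _ in range(3):  # in practice, loop indefinitely (e.g. every 5 s)
        elapsed = probe_socket(SOCK)
        if elapsed is None or elapsed > 1.0:
            print(time.strftime("%F %T"), "slow or failed connect:", elapsed)
        time.sleep(1)
```

Running this alongside the workload and correlating slow probes with `SHOW FULL PROCESSLIST` output would narrow down whether the stall is at the socket or inside the server.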

iostat:

    Linux 4.15.0-50-generic (fdb)   09/25/2019   _x86_64_   (20 CPU)

    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
              48.22    0.00   17.26    0.79    3.08   30.65

    Device    tps      kB_read/s  kB_wrtn/s  kB_read    kB_wrtn
    loop0     0.00     0.00       0.00       8          0
    sdb       0.09     0.27       2.20       24628      204340
    sda       735.43   1740.89    5578.76    161421385  517281936

It seems alright.

MySQL conf:

    user = mysql
    pid-file = /var/run/mysqld/mysqld.pid
    socket = /var/run/mysqld/mysqld.sock
    port = 3306
    basedir = /usr
    datadir = /var/lib/mysql
    tmpdir = /tmp
    lc-messages-dir = /usr/share/mysql
    skip-external-locking
    skip-host-cache
    skip-name-resolve
    bind-address = 0.0.0.0
    key_buffer_size = 128M
    max_allowed_packet = 32M
    thread_stack = 192K
    thread_cache_size = 32
    wait_timeout = 60
    # This replaces the startup script and checks MyISAM tables if needed
    # the first time they are touched
    myisam_recover_options = BACKUP
    max_connections = 30000
    max_connect_errors = 3000
    #table_cache = 64
    #thread_concurrency = 10
    tmp_table_size = 512M
    max_heap_table_size = 512M
    table_open_cache = 4096
    performance_schema = on
    innodb_buffer_pool_size = 48G
    innodb_log_file_size = 6144M
    innodb_buffer_pool_instances = 20
    innodb_purge_threads = 8
    innodb_flush_log_at_trx_commit = 2

We have skip-name-resolve, so it's not a DNS issue.

MySQL status: sorry, it's too large, so I put it in a gist: https://gist.github.com/fayland/97050f70cb258a41f1fa1bd3046df4d2

It would be great if someone could point us in a direction to check. If you need any more info, we'll be glad to provide it.

Thanks

  • Additional information request. Post on pastebin.com and share the links. Text results of:
    B) SHOW GLOBAL STATUS; after minimum 24 hours UPTIME
    C) SHOW GLOBAL VARIABLES;
    D) SHOW FULL PROCESSLIST;
    E) complete MySQLTuner report
    F) SHOW ENGINE INNODB STATUS;
    AND optional, very helpful information: htop OR top OR mytop for most active apps, ulimit -a for a linux/unix list of limits, iostat -xm 5 3 for IOPS by device and core/cpu count, df -h for a linux/unix free space list by device, for server workload tuning analysis to provide suggestions. Commented Oct 10, 2019 at 0:49
  • @WilsonHauck gist.github.com/fayland/d0d637e65d7c283810f0cf53d68c8a95 — those are probably not the right stats, since we don't have the issue right now. We'll see if we can capture it when it happens. Thanks. Commented Oct 11, 2019 at 3:16
  • Thank you for posting requested info. Analysis in process. Due to out of town scheduled event, it may be my Wednesday before I post suggestions. I am in UTC -6 hours, what is your time zone? Commented Oct 11, 2019 at 11:48
  • Yes, it's alright. We're considering moving to a cluster like Percona XtraDB Cluster, since it still happens a lot recently (every hour, actually). We're in UTC+8, China. Thanks for your help. Commented Oct 12, 2019 at 13:50

1 Answer


Suggestions to consider for your my.cnf [mysqld] section

    thread_cache_size=256        # from 32 for default with v 10.0.38
    innodb_io_capacity=1900      # from 200 to enable higher IOPS to your SSD devices
    read_buffer_size=256K        # from 128K to reduce handler_read_next RPS of 2,337,990
    read_rnd_buffer_size=128K    # from 256K to reduce handler_read_rnd_next RPS of 1,982,624
    innodb_flushing_avg_loops=5  # from 30 to reduce delays and reduce innodb_buffer_pool_pages_dirty of 23,455
    thread_concurrency=18        # from 10 to use more of your 20 cores
    innodb_log_buffer_size=2G    # from 8M to support about 7 log minutes in RAM
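The thread_cache_size suggestion can be sanity-checked against the SHOW GLOBAL STATUS counters: if a large share of connections had to create a new OS thread (Threads_created vs. Connections), every burst of traffic pays thread-creation cost at connect time. A rough sketch of that calculation, assuming the tab-separated output of `mysql -Ne "SHOW GLOBAL STATUS"` (the sample numbers at the end are made up for illustration):

```python
def parse_status(text):
    """Parse the tab-separated 'Variable_name\\tValue' lines that
    `mysql -Ne "SHOW GLOBAL STATUS"` prints into a dict."""
    out = {}
    for line in text.splitlines():
        parts = line.split("\t")
        if len(parts) == 2:
            out[parts[0]] = parts[1]
    return out

def thread_cache_miss_rate(status):
    """Percent of connections that had to create a new OS thread.
    Near 0% means the thread cache absorbs almost all connects;
    a high value means connects pay thread-creation cost under load."""
    connections = int(status.get("Connections", 0))
    created = int(status.get("Threads_created", 0))
    if connections == 0:
        return 0.0
    return 100.0 * created / connections

# Made-up sample: 5% of connects created a new thread.
sample = "Connections\t1000\nThreads_created\t50"
print(thread_cache_miss_rate(parse_status(sample)))  # -> 5.0
```

Watching this ratio before and after raising thread_cache_size would show whether the change actually helps the connect-time spikes.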

There are more global variables to be tuned for your instance. Disclaimer: I am the content author of the website mentioned in my profile (Network profile), where you may download FREE Utility Scripts and consider additional analysis services.

  • @FaylandLam Any comments you could share? Positive/negative performance change? Commented Oct 26, 2019 at 15:42
  • It's a little better, but the problem is not resolved. We're considering moving away from MariaDB and trying something else, like Percona Server or PostgreSQL. Thanks. Commented Oct 28, 2019 at 0:16
