
I am hosting a WordPress website on a DigitalOcean droplet (1GB RAM). The website's MySQL database crashes occasionally, which causes the website to show "Error establishing database connection". Memory usage dropped around 2:40 am, indicating that this is when the database crashed. I checked the MySQL log file for that day, and the earliest entry was at 10:47 am. Here is the beginning of the log file:

2021-12-06T10:47:14.800977Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timest$
2021-12-06T10:47:14.806192Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.36-0ubuntu0.18.04.1) starting as process 2810 ...
2021-12-06T10:47:14.819674Z 0 [Note] InnoDB: PUNCH HOLE support available
2021-12-06T10:47:14.819711Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2021-12-06T10:47:14.819716Z 0 [Note] InnoDB: Uses event mutexes
2021-12-06T10:47:14.819720Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2021-12-06T10:47:14.819723Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2021-12-06T10:47:14.819727Z 0 [Note] InnoDB: Using Linux native AIO
2021-12-06T10:47:14.820551Z 0 [Note] InnoDB: Number of pools: 1
2021-12-06T10:47:14.823342Z 0 [Note] InnoDB: Using CPU crc32 instructions
2021-12-06T10:47:14.825847Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2021-12-06T10:47:14.826246Z 0 [ERROR] InnoDB: mmap(137428992 bytes) failed; errno 12
2021-12-06T10:47:14.826258Z 0 [ERROR] InnoDB: Cannot allocate memory for the buffer pool
2021-12-06T10:47:14.826262Z 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2021-12-06T10:47:14.826270Z 0 [ERROR] Plugin 'InnoDB' init function returned error.
2021-12-06T10:47:14.826274Z 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2021-12-06T10:47:14.826278Z 0 [ERROR] Failed to initialize builtin plugins.
2021-12-06T10:47:14.826282Z 0 [ERROR] Aborting
2021-12-06T10:47:14.832237Z 0 [Note] Binlog end
2021-12-06T10:47:14.832297Z 0 [Note] Shutting down plugin 'CSV'
2021-12-06T10:47:14.832572Z 0 [Note] /usr/sbin/mysqld: Shutdown complete

Based on the log file, it appears MySQL is running out of memory. However, memory usage for the droplet was steady around 73%, until the database crash around 2:40 am, when it dropped down to 32%. It seems to have plenty of memory available, so why is it crashing?

EDIT As requested, here are the contents of my MySQL config files:

/etc/mysql/conf.d/mysql.cnf

[mysql] 

/etc/mysql/conf.d/mysqldump.cnf

[mysqldump]
quick
quote-names
max_allowed_packet = 16M

/etc/mysql/mysql.conf.d/mysqld.cnf

#
# The MySQL database server configuration file.
#
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html

# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# escpecially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.

# Here is entries for some specific programs
# The following values assume you have at least 32M ram

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
#
# * Basic Settings
#
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 127.0.0.1
#
# * Fine Tuning
#
key_buffer_size = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover-options = BACKUP
#max_connections = 100
#table_open_cache = 64
#thread_concurrency = 10
#
# * Query Cache Configuration
#
query_cache_limit = 1M
query_cache_size = 16M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file = /var/log/mysql/mysql.log
#general_log = 1
#
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
#
# Here you can see queries with especially long duration
#slow_query_log = 1
#slow_query_log_file = /var/log/mysql/mysql-slow.log
#long_query_time = 2
#log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
#server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
max_binlog_size = 100M
#binlog_do_db = include_database_name
#binlog_ignore_db = include_database_name
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem

/etc/mysql/mysql.conf.d/mysqld_safe_syslog.cnf

[mysqld_safe]
syslog
  • You likely have some variable settings in /etc/my.cnf.d/server.cnf related to memory and/or InnoDB that do not fit your database size or available memory. Can you show us these variables and your database size? Commented Dec 12, 2021 at 8:35
  • @dominix There is no /etc/my.cnf.d/server.cnf directory or file. The closest thing I can find is /etc/mysql/mysql.conf.d/mysqld.cnf. The server is running Ubuntu 18.04, if that helps. Commented Dec 12, 2021 at 8:39
  • Can you add a longer snippet of the log file to the question? This log file looks like the MySQL server is just starting up, which doesn't match your description. Commented Dec 12, 2021 at 8:41
  • @dominix The database size is 43.5 MB. Commented Dec 12, 2021 at 8:46
  • @TeroKilkanen According to memory usage history, it appears the database crashed around 2:40 am. However, the log file for that day has no entries before 10:47 am. This is the closest I could get to the time of the database crash. I will expand my question to include all entries from the start of the log file. Commented Dec 12, 2021 at 8:49

2 Answers


After reviewing the kernel log in chat, the reason for the MySQL crash was an out-of-memory situation.

Dec 6 02:47:13 kernel: [341799.228400] Out of memory: Kill process 23566 (mysqld) score 197 or sacrifice child
Dec 6 02:47:13 kernel: [341799.229866] Killed process 23566 (mysqld) total-vm:1168576kB, anon-rss:198536kB, file-rss:0kB, shmem-rss:0kB

During that time there were lots of active Apache2 processes. This means that increased traffic caused increased memory consumption. As a result, the kernel decided to kill MySQL to free up memory.
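To confirm an OOM kill on your own server, you can filter the kernel log for OOM-killer messages (log paths assumed for a stock Ubuntu 18.04 droplet). A sketch, demonstrated here on a sample line in the format shown above:

```shell
# Hypothetical sample line, in the same format as the kernel log excerpt above
sample='Dec 6 02:47:13 kernel: [341799.228400] Out of memory: Kill process 23566 (mysqld) score 197 or sacrifice child'

# On the real server, point the same filter at the kernel logs, e.g.:
#   grep -iE 'out of memory|oom-killer|killed process' /var/log/kern.log /var/log/syslog
printf '%s\n' "$sample" | grep -iE 'out of memory|oom-killer|killed process'
```

If the grep matches lines naming mysqld, the database was killed by the kernel rather than crashing on its own.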

One can run mysqltuner to analyse the MySQL configuration. It gives recommendations that can help reduce MySQL's memory consumption.

However, increased traffic can still cause the same problem. Therefore, the more sustainable solution is to increase the memory available to the droplet.

  • I just checked Google Analytics for this website, and there was only one page view on December 6 (the website went down on December 6 between 2:40-2:50 am). This doesn't seem to be matching up with what you saw in the kernel log. Any idea why? Commented Dec 12, 2021 at 10:49
  • Google Analytics only shows traffic from web browsers that execute the Google Analytics JS code. In addition, the internet has lots of bot traffic. You can see that traffic in your web server's access.log Commented Dec 12, 2021 at 11:10
  • Okay, makes sense. Must have been a lot of bot traffic all at once to overwhelm a server that was consistently using only 73% memory? Do you think it was a DDoS attack? Is there some way to block this bot traffic in the future? Commented Dec 13, 2021 at 6:35

1GB of RAM is very tiny these days. The server might survive if you set innodb_buffer_pool_size = 50M. Recommendations for some percentage of RAM do not apply for such a small RAM size.

There is a risk that 50M will be too small. Please provide your my.cnf so we can look for other settings to shrink. Some likely ones:

max_connections = 10
key_buffer_size = 10M
tmp_table_size = 10M
max_heap_table_size = 10M
table_open_cache = 100
query_cache_size = 0
max_allowed_packet = 8M

Yes, MySQL/MariaDB will run in a tiny machine, but tuning is necessary.
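As a sketch of how this could look, assuming the stock Ubuntu 18.04 layout where server settings live in /etc/mysql/mysql.conf.d/mysqld.cnf, the values above (plus the suggested buffer pool size) would go under the [mysqld] section. These are starting points for a 1GB machine, not values tested against this particular workload:

```ini
[mysqld]
innodb_buffer_pool_size = 50M
max_connections         = 10
key_buffer_size         = 10M
tmp_table_size          = 10M
max_heap_table_size     = 10M
table_open_cache        = 100
query_cache_size        = 0
max_allowed_packet      = 8M
```

After editing, restart MySQL (e.g. `sudo systemctl restart mysql`) and watch memory usage before and after to confirm the footprint actually shrank.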

If Apache is running on the same server, lower its MaxRequestWorkers to 10. Also check the configuration of anything else running on the same [virtual] server.
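With the prefork MPM on Ubuntu 18.04, that limit typically lives in a file such as /etc/apache2/mods-available/mpm_prefork.conf (path and surrounding values assumed); a minimal sketch:

```apache
<IfModule mpm_prefork_module>
    StartServers             2
    MinSpareServers          2
    MaxSpareServers          4
    MaxRequestWorkers        10
    MaxConnectionsPerChild   1000
</IfModule>
```

Capping MaxRequestWorkers bounds Apache's worst-case memory use (roughly workers × per-process RSS), so a traffic spike queues requests instead of triggering the OOM killer. Restart Apache (e.g. `sudo systemctl restart apache2`) after the change.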

