AWS RDS out-of-memory error when adding a column

We have a MySQL database on AWS RDS using the InnoDB engine, MySQL version 5.6.19.

When we try to add a column to the table, we get the error message below:

ERROR 1041 (HY000): Out of memory; check if mysqld or any other process uses all available memory; if not, you may need to use "ulimit" to allow mysqld to use more memory, or you can add extra swap space

The statement we run to modify the table:

 ALTER TABLE mytablename ADD COLUMN temp_colume varchar(255) NULL AFTER temp_firstcolumn;

Our RDS instance is a db.m3.2xlarge with 30 GB of memory. Our InnoDB buffer pool size is DBInstanceClassMemory * 3/4, roughly 24 GB.
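For reference, the memory settings actually in effect can be confirmed from the instance itself. This is a generic MySQL check, not anything specific to this setup:

 -- Buffer pool size currently in effect (value is in bytes)
 SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

 -- Other buffers that contribute to total memory use
 SHOW VARIABLES WHERE Variable_name IN
     ('tmp_table_size', 'max_heap_table_size', 'sort_buffer_size', 'innodb_sort_buffer_size');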

We can successfully create a new table with the column change already in place, but altering the existing table gives us this error.
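A minimal sketch of that manual rebuild looks like the following; mytablename, temp_colume and temp_firstcolumn come from the question, mytablename_new and mytablename_old are placeholders, and the remaining original column names would need to be filled in where indicated:

 -- Build an empty copy of the table and add the new column to it
 CREATE TABLE mytablename_new LIKE mytablename;
 ALTER TABLE mytablename_new
     ADD COLUMN temp_colume varchar(255) NULL AFTER temp_firstcolumn;

 -- Copy the data across; list every original column explicitly so the
 -- new nullable column is simply left empty
 INSERT INTO mytablename_new (temp_firstcolumn /*, ...remaining original columns... */)
 SELECT temp_firstcolumn /*, ...remaining original columns... */
 FROM mytablename;

 -- Swap the tables in a single step
 RENAME TABLE mytablename TO mytablename_old,
              mytablename_new TO mytablename;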

Has anyone encountered the same issue?

+6
2 answers

I have recently seen ALTER TABLE statements fail on RDS in this way. AWS support recommended modifying the ALTER TABLE statement to look like this:

 ALTER TABLE tbl ADD COLUMN abc varchar(123) AFTER zyx, ALGORITHM=COPY 

The trick is to add

 , ALGORITHM=COPY 

to the end of the statement.
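Applied to the statement from the question, that suggestion would look like this (table and column names are taken from the question):

 -- Same ALTER as in the question, forcing the table-copy algorithm
 ALTER TABLE mytablename
     ADD COLUMN temp_colume varchar(255) NULL AFTER temp_firstcolumn,
     ALGORITHM=COPY;

Presumably this forces the older table-copy path instead of the in-place online DDL path that was failing.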

The ALGORITHM clause is described in the MySQL ALTER TABLE documentation: https://dev.mysql.com/doc/refman/5.7/en/alter-table.html

+7

The error was fixed simply by rebooting our RDS instance. After the reboot, free memory increased by about 1.5 GB: we had ~3.5 GB free before, and now it is almost ~5 GB. I assume the OS on the RDS host had cached part of the memory, but I am still a little confused about why it gave an out-of-memory error when there was 3.5 GB of free memory and the table we were trying to alter was only 16 KB.
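For anyone wanting to verify the size of the table being altered, a standard information_schema query works; the schema and table names below are placeholders:

 -- Approximate on-disk size of the table, in KB
 SELECT table_name,
        ROUND((data_length + index_length) / 1024) AS size_kb
 FROM information_schema.TABLES
 WHERE table_schema = 'mydatabase'   -- placeholder schema name
   AND table_name   = 'mytablename';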

I also found a similar problem, linked below: https://dba.stackexchange.com/questions/74432/mysql-rds-instance-eating-up-memory-and-swapping

+2
