Recently I was troubleshooting an issue with a Perl script that is used to back up and then restore a PostgreSQL database every night. The script creates a new database, moves the current database out of the way, and then moves the newly created database into its place. I noticed that the restore database was not clearing out properly, so I started digging through the script's log output. There I found the following error, which had started appearing recently:
pg_restore: [custom archiver] out of memory
The server where the database is backed up is a VM, so at first I assumed it really was running out of memory. After some investigation, I realized what I had done. I had recently brought up a new server to replace the original server that backed up to this VM, but I had missed a cron entry that ran this Perl script on the old machine. That meant two servers were both trying to back up to the same VM, clobbering each other's dump files. It turns out this error is also displayed when the dump file is corrupted or nonexistent.
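For illustration, the forgotten entry would have looked something like an ordinary crontab line on the old server; the script path and schedule here are hypothetical, not from my actual setup:

```
# crontab on the decommissioned server -- this entry should have been removed
0 2 * * * /usr/local/bin/backup_restore.pl
```

With the same line still live on the new server, both machines ran the nightly backup against the one VM.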
So, to make a long story short: be aware that PostgreSQL's pg_restore can report an "out of memory" error when the dump file is nonexistent or corrupted, not just when the server is actually low on memory.
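One way to avoid being misled by this error again is to sanity-check the dump file before restoring it. The sketch below is my own addition, assuming a POSIX shell and a hypothetical dump path; `pg_restore --list` reads the archive's table of contents without touching any database, so it cheaply catches a corrupted custom-format dump:

```shell
#!/bin/sh
# Sketch of a pre-restore sanity check; the dump path used in a real
# deployment would come from the backup script's configuration.

verify_dump() {
    dump="$1"
    # A missing or zero-byte dump is one way to trigger pg_restore's
    # misleading "out of memory" error, so catch that case first.
    if [ ! -s "$dump" ]; then
        echo "error: dump file '$dump' is missing or empty" >&2
        return 1
    fi
    # --list prints the archive's table of contents without restoring
    # anything, so it detects a corrupted custom-format dump early.
    pg_restore --list "$dump" >/dev/null 2>&1 || {
        echo "error: dump file '$dump' appears corrupted" >&2
        return 1
    }
}

# Demo on a deliberately empty file: the check fails before
# pg_restore is ever invoked.
tmp=$(mktemp)
result=$(verify_dump "$tmp" 2>/dev/null && echo "dump OK" || echo "dump check failed")
rm -f "$tmp"
echo "$result"   # prints "dump check failed"
```

Running a check like this from the backup script, and refusing to swap databases when it fails, would also have surfaced the two-servers-one-VM collision much sooner.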