
CriticalMailer ::Exception:: FATAL C53300 Msorry, too many clients already Fproc.c L276 RInitProcess

I was tasked with troubleshooting a Ruby Rufus task scheduler issue today, so I started by reviewing the log files. Digging into the logs I noticed numerous errors, which are expanded below. The primary error appeared to be a connection issue between Ruby and PostgreSQL, as the server was complaining about too many connections. While the server I was looking into is actively used in a development environment, the connection limit of 100 should have been more than enough to handle the load. After investigating further I found that some large PostgreSQL queries from within the Ruby code were causing the issue, and once those were optimized things started working properly. Below I describe the errors noticed in the Ruby logs, the steps taken to troubleshoot them, and the final resolution.
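Why The 101st Connection Fails (Illustrative Ruby Sketch):

To make the failure mode concrete, below is a minimal Ruby sketch of the server-side behavior. Note this is an illustration only: ConnectionLimiter is a made-up class, not part of postgres-pr, Rails, or PostgreSQL itself. The point is simply that when slow queries keep sessions open, the connection count climbs until the server starts refusing new clients with SQLSTATE 53300.

```ruby
# Simplified model of PostgreSQL's max_connections behavior (illustration
# only, not the real postgres-pr client or server code): the server accepts
# at most `limit` concurrent sessions and refuses the rest.
class ConnectionLimiter
  class TooManyClients < StandardError; end

  def initialize(limit)
    @limit  = limit
    @active = 0
  end

  # Accept a new client session, or refuse it the way the server does.
  def connect
    raise TooManyClients, "FATAL C53300 Msorry, too many clients already" if @active >= @limit
    @active += 1
  end

  # Closing a session frees a slot for the next client.
  def disconnect
    @active -= 1 if @active > 0
  end
end

server = ConnectionLimiter.new(100)  # default max_connections
100.times { server.connect }         # slow report queries keep sessions open
begin
  server.connect                     # the 101st client is turned away
rescue ConnectionLimiter::TooManyClients => e
  puts e.message                     # prints the FATAL C53300 message
end
```

This is why both raising max_connections and optimizing the slow queries "fix" the error: one raises the limit, the other stops sessions from piling up against it.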

Ruby Rufus Task Scheduler Logs Error:

CriticalMailer.deliver_successful_user_logins_daily()::Exception:: FATAL  C53300  Msorry, too many clients already        Fproc.c L276    RInitProcess

Ruby Rufus Task Scheduler Logs Error Details:

[DEV]CriticalMailer.deliver_successful_user_logins_daily()::Exception:: FATAL  C53300  Msorry, too many clients already        Fproc.c L276    RInitProcess
/usr/local/lib/ruby/gems/1.8/gems/postgres-pr-0.5.1/lib/postgres-pr/connection.rb:81:in `initialize'
/usr/local/lib/ruby/gems/1.8/gems/postgres-pr-0.5.1/lib/postgres-pr/connection.rb:55:in `loop'
/usr/local/lib/ruby/gems/1.8/gems/postgres-pr-0.5.1/lib/postgres-pr/connection.rb:55:in `initialize'
/usr/local/lib/ruby/gems/1.8/gems/postgres-pr-0.5.1/lib/postgres-pr/postgres-compat.rb:23:in `new'
/usr/local/lib/ruby/gems/1.8/gems/postgres-pr-0.5.1/lib/postgres-pr/postgres-compat.rb:23:in `initialize'
/home/web/project/vendor/rails/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb:814:in `connect'
/home/web/project/vendor/rails/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb:814:in `connect'
/home/web/project/vendor/rails/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb:251:in `initialize'
/home/web/project/vendor/rails/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb:37:in `new'
/home/web/project/vendor/rails/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb:37:in `postgresql_connection'
/home/web/project/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/connection_specification.rb:292:in `send'
/home/web/project/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/connection_specification.rb:292:in `connection='
/home/web/project/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/connection_specification.rb:260:in `retrieve_connection'
/home/web/project/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/connection_specification.rb:78:in `connection'
/home/web/project/vendor/rails/activerecord/lib/active_record/base.rb:2693:in `quoted_table_name'
/home/web/project/vendor/rails/activerecord/lib/active_record/base.rb:1469:in `construct_finder_sql'
/home/web/project/vendor/rails/activerecord/lib/active_record/base.rb:1341:in `find_every'
/home/web/project/vendor/rails/activerecord/lib/active_record/base.rb:536:in `find'
/home/web/project/app/models/compliance.rb:161:in `successful_user_logins'
/home/web/project/app/models/critical_mailer.rb:104:in `successful_user_logins_daily'
/home/web/project/vendor/rails/actionmailer/lib/action_mailer/base.rb:446:in `__send__'
/home/web/project/vendor/rails/actionmailer/lib/action_mailer/base.rb:446:in `create!'
/home/web/project/vendor/rails/actionmailer/lib/action_mailer/base.rb:439:in `initialize'
/home/web/project/vendor/rails/actionmailer/lib/action_mailer/base.rb:383:in `new'
/home/web/project/vendor/rails/actionmailer/lib/action_mailer/base.rb:383:in `method_missing'
/home/web/project/config/jobs/reporting.rb:65
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/jobs.rb:136:in `call'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/jobs.rb:136:in `trigger'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/scheduler.rb:331:in `call'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/scheduler.rb:331:in `trigger_job'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/scheduler.rb:331:in `initialize'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/scheduler.rb:331:in `new'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/scheduler.rb:331:in `trigger_job'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/jobs.rb:121:in `send'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/jobs.rb:121:in `trigger'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/jobs.rb:334:in `trigger_if_matches'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/jobqueues.rb:136:in `trigger_matching_jobs'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/jobqueues.rb:136:in `each'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/jobqueues.rb:136:in `trigger_matching_jobs'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/scheduler.rb:280:in `cron_step'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/scheduler.rb:266:in `step'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/scheduler.rb:351:in `start'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/scheduler.rb:349:in `loop'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/scheduler.rb:349:in `start'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/scheduler.rb:348:in `initialize'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/scheduler.rb:348:in `new'
/home/web/project/vendor/gems/rufus-scheduler-2.0.1/lib/rufus/sc/scheduler.rb:348:in `start'
(eval):121
(eval):25:in `start'
(eval):100
./script/runner:3:in `eval'
/home/web/project/vendor/rails/railties/lib/commands/runner.rb:45
./script/runner:3:in `require'
./script/runner:3
Exception in UtilityHelper.self.safe_send_email FATAL   C53300  Msorry, too many clients already        Fproc.c L276    RInitProcess
Exception in UtilityHelper.self.safe_send_email FATAL   C53300  Msorry, too many clients already        Fproc.c L276    RInitProcess
Exception in UtilityHelper.self.safe_send_email FATAL   C53300  Msorry, too many clients already        Fproc.c L276    RInitProcess

Raise The Connection Limit In PostgreSQL:

The exceptions above were thrown from a daily report that runs at night, during what should be a low-traffic time. To get past the error (SQLSTATE 53300, which is PostgreSQL's too_many_connections condition) and work toward a final resolution, I initially raised the maximum number of connections PostgreSQL allows from the default of 100 to 1000. To do this, modify the postgresql.conf file, typically located in the /var/lib/pgsql/data directory. The new line should look similar to the one below.

Modify PostgreSQL max_connections Variable:

max_connections = 1000                  # (change requires restart)

As noted in the configuration file, you are required to restart PostgreSQL after modifying max_connections, so issue the command below to restart it from a Linux shell. The example below is from CentOS Linux but should be similar on most distributions.

Restart PostgreSQL On CentOS Linux:

[root@dev ~]# /etc/init.d/postgresql restart
Stopping postgresql service: [  OK  ]
Starting postgresql service: [  OK  ]
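
Verify The New Connection Limit:

After the restart it is worth confirming that the new limit actually took effect, and keeping an eye on how many sessions are open while the report runs. Assuming the stock psql client and a superuser account named postgres (adjust the -U flag for your setup), something similar to the below should work. SHOW reports the running value of max_connections, and pg_stat_activity has one row per open session.

```
[root@dev ~]# psql -U postgres -c "SHOW max_connections;"
[root@dev ~]# psql -U postgres -c "SELECT count(*) FROM pg_stat_activity;"
```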

Once PostgreSQL was restarted I ran the report that had been causing the issues, and sure enough it completed without error this time. While the report was running I watched the PostgreSQL logs using the command below.

Analyze PostgreSQL Logs In Real Time:

[root@dev ~]# tail -f /var/lib/pgsql/data/pg_log/postgresql-Tue.log
2010-02-02 12:09:31.908 CSTLOG:  statement: INSERT INTO "mgmt_queries" ("cycle_num", "device_id", "poll_rate", "timestamp_server", "mgmt_cmd_id", "timestamp_device") VALUES(2, 221, 30, '2010-02-02 18:09:31.907087', NULL, '2010-02-02 18:11:45.000000')
2010-02-02 12:09:31.909 CSTLOG:  duration: 0.528 ms
2010-02-02 12:09:31.909 CSTLOG:  statement: SELECT currval('mgmt_queries_id_seq')
2010-02-02 12:09:31.909 CSTLOG:  duration: 0.079 ms
2010-02-02 12:09:31.909 CSTLOG:  statement: COMMIT
2010-02-02 12:09:31.910 CSTLOG:  duration: 0.424 ms
2010-02-02 12:09:31.910 CSTLOG:  statement: SELECT * FROM "device_latest_queries"     WHERE ("device_latest_queries"."id" = 221)
2010-02-02 12:09:31.910 CSTLOG:  duration: 0.207 ms
2010-02-02 12:09:31.911 CSTLOG:  statement: SELECT * FROM "mgmt_cmds"     WHERE (pending_on_ack = true and originator = 'server' and pending = true and device_id = 221)  ORDER BY timestamp_initiated DESC LIMIT 1
2010-02-02 12:09:31.911 CSTLOG:  duration: 0.365 ms

Please note that your log files are most likely named differently, depending on how logging is configured in postgresql.conf. The main thing I suggest looking for in this scenario is the “duration” lines, which report how long each query took to run in milliseconds. Doing this, I noticed several queries from the report code that were taking multiple seconds to run, which should rarely if ever be the case. Once we optimized those queries, I set max_connections back to the default of 100 and things started working properly again.
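
Log Only Slow Queries In postgresql.conf:

Rather than eyeballing every “duration” line in the log, PostgreSQL can be told to log only the slow statements. Setting log_min_duration_statement in postgresql.conf (value in milliseconds; -1 disables it, 0 logs every statement) makes the problem queries stand out on their own. The 1000 ms threshold below is just an example; pick whatever counts as “slow” for your workload.

```
log_min_duration_statement = 1000       # log any statement that runs for 1000 ms or longer
```

Unlike max_connections, this setting only requires a configuration reload, not a full restart.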
