
Update: Rather than spinning up my Resque worker through the docker command (to test), I instead killed all my containers with fig kill, added the worker to the fig configuration, and ran fig up. This worked and all my containers were able to run in harmony. This brings me to another question -- when you run fig up, does it allocate all available memory, thus preventing you from running other containers outside of Docker?
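For reference, the sequence described above boils down to something like this (just a sketch; it assumes the Resque worker was added as a new service entry in fig.yml):

fig kill      # stop every container defined in fig.yml
# ...add the Resque worker service to fig.yml...
fig up        # (re)create and start all the services, including the new worker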

I'm provisioning a staging server right now using Docker and I'm running into a weird error when trying to start a Ruby worker. The server I'm using is a $20 Linode with 2GB RAM and 2 CPU cores.

I'm running nginx, unicorn, mysql, redis, and elasticsearch containers on this VPS using Fig without any problem:

CONTAINER ID   IMAGE                             COMMAND                CREATED        STATUS        PORTS                                              NAMES
a04cce025794   dockerfile/nginx:latest           "nginx"                21 hours ago   Up 21 hours   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp           arthouse_nginx_1
607139f9ba16   rails:latest                      "/bin/bash -l -c 'cd   21 hours ago   Up 21 hours   3000/tcp, 0.0.0.0:49222->8080/tcp                  arthouse_app_1
6274f8fe5dc0   dockerfile/elasticsearch:latest   "/elasticsearch/bin/   21 hours ago   Up 21 hours   0.0.0.0:49220->9200/tcp, 0.0.0.0:49221->9300/tcp   arthouse_elasticsearch_1
55d68c470ce5   dockerfile/redis:latest           "redis-server /etc/r   21 hours ago   Up 21 hours   0.0.0.0:49219->6379/tcp                            arthouse_redis_1
50635616ddaa   mysql:latest                      "/entrypoint.sh mysq   21 hours ago   Up 21 hours   0.0.0.0:49218->3306/tcp                            arthouse_database_1

I'm trying to spin up another Rails container which will run a Resque worker:

docker run -it --link arthouse_elasticsearch_1:elasticsearch --link arthouse_redis_1:redis --link arthouse_database_1:db rails /bin/bash 

When I launch my container and attempt to run Resque, I get a memory allocation error:

root@741f3a425908:~/rails# bundle exec rake environment resque:work VERBOSE=true QUEUE=*
Digest::Digest is deprecated; use Digest
Amazon Web Services Initialized.
Digest::Digest is deprecated; use Digest
Digest::Digest is deprecated; use Digest
---- Redis Initialization ----
Connecting to 172.17.0.194 on 6379 in the development environment
Redis is initialized.
*** DEPRECATION WARNING: Resque::Worker#verbose and #very_verbose are deprecated. Please set Resque.logger.level instead
Called from:
/root/.rbenv/versions/2.1.4/lib/ruby/gems/2.1.0/gems/resque-1.25.2/lib/resque/worker.rb:746:in `verbose='
/root/.rbenv/versions/2.1.4/lib/ruby/gems/2.1.0/gems/resque-1.25.2/lib/resque/tasks.rb:16:in `block (2 levels) in <top (required)>'
/root/.rbenv/versions/2.1.4/lib/ruby/gems/2.1.0/gems/rake-10.3.2/lib/rake/task.rb:240:in `call'
/root/.rbenv/versions/2.1.4/lib/ruby/gems/2.1.0/gems/rake-10.3.2/lib/rake/task.rb:240:in `block in execute'
/root/.rbenv/versions/2.1.4/lib/ruby/gems/2.1.0/gems/rake-10.3.2/lib/rake/task.rb:235:in `each'
/root/.rbenv/versions/2.1.4/lib/ruby/gems/2.1.0/gems/rake-10.3.2/lib/rake/task.rb:235:in `execute'
*** Starting worker 741f3a425908:171:*
WARNING: This way of doing signal handling is now deprecated. Please see http://hone.heroku.com/resque/2012/08/21/resque-signals.html for more info.
*** Registered signals
*** Running before_first_fork hooks
*** Checking mailer
*** Found job on mailer
*** got: (Job{mailer} | CustomerMailer | ["customer_registered", 2])
*** resque-1.25.2: Processing mailer since 1416336015 [CustomerMailer]
*** Running before_fork hooks with [(Job{mailer} | CustomerMailer | ["customer_registered", 2])]
*** Failed to start worker : #<Errno::ENOMEM: Cannot allocate memory - fork(2)>
  • Add more memory to the server? Commented Nov 18, 2014 at 20:20

2 Answers


Docker containers are basically just processes, so they can allocate memory in all the usual ways that a process can.

Since you say you are on Linode, perhaps you have the default 256MB swap size? That would give you an overall limit for everything on the system of 2.25GB; maybe that's just not enough?

A command like top will show you how much memory is in use.
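For example, something like this on the host gives a quick picture of memory and swap (a sketch, assuming a typical Linux install with the standard procps tools; nothing here is Docker-specific):

free -m                          # total/used RAM and swap, in MB
swapon -s                        # how much swap is configured (e.g. Linode's default 256MB)
ps aux --sort=-rss | head -10    # the processes currently using the most resident memory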


I've seen similar messages during my experiments, and it turned out that a Docker container is still around after it is stopped.

Try this:

docker ps -a 

You will probably see some "Exited" entries that are still holding on to resources. You can remove unneeded containers with docker rm, but I urge you to make sure you really understand the difference between images and containers, and also how a container's state changes while it is running.

E.g. a database image might contain the database engine and some basic data files, but after running it (i.e. docker run, which creates a container from the image) and issuing some INSERT/UPDATE/DELETE SQL statements, your container (not the image!) holds new data. Removing the container will delete that data too.
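A minimal cleanup sketch, assuming you have checked that the stopped containers hold nothing you still need (the status filter requires a reasonably recent Docker):

docker ps -a -q -f status=exited                 # list only the IDs of stopped containers
docker rm <container-id>                         # remove one stopped container and its data
docker rm $(docker ps -a -q -f status=exited)    # or remove all stopped containers at once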
