Live Remote Development with Rails
  • That said, we've found it very productive in the early stages of a project, when only a single environment is needed and developers are aware of the pitfalls. If you're going to do it, there are a few gotchas to watch out for; handling them up front makes the whole thing more productive and less painful.

  • See also: How to get an Azure cloud Rails environment running via VS Code

  • Optimizing Caching

  • If you stick with the default caching setup, in many cases you won't see changes in real time as you make them. A few tweaks to config/environments/production.rb will fix this:

  • # config/environments/production.rb
    
    # We use a flag here rather than hardcoding false because Delayed Job
    # workers need to re-enable class caching to avoid performance issues
    # (see below). Note ENV values are strings, so any non-empty value
    # counts as "on".
    config.cache_classes = ENV['CACHE_CLASSES'] || false
    
    # Disable controller-level caching so changes show up immediately
    config.action_controller.perform_caching = false
    
    # Optional: render full error pages in the browser, which makes debugging
    # much easier if things go wrong. Only enable this if you're sure no
    # members of the public will see your app, and remember to disable it
    # once you go live.
    config.consider_all_requests_local = true
  • Working with Delayed Job

  • Don't use rake jobs:work to process jobs; it runs in the foreground and isn't persistent, so workers die with your session. Instead use bin/delayed_job start, expanded on below. This also requires installing the daemons gem.
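  • For reference, a minimal sketch of the Gemfile change, assuming Delayed Job itself is already set up (run bundle install afterwards, and rails generate delayed_job if you don't yet have the bin/delayed_job script):

  • # Gemfile
    gem 'daemons' # needed by bin/delayed_job to daemonize workers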

  • Add an initializer telling Delayed Job not to reload the app between jobs. With cache_classes disabled, workers otherwise reload the whole app for every job, which spikes CPU usage (at least in my tests).

  • # config/initializers/delayed_job.rb
    
    # Stop workers reloading the app between jobs (they would otherwise do
    # this because cache_classes is false)
    Delayed::Worker.class_eval do
      def self.reload_app?
        false
      end
    end
  • Use the following command to process jobs (note the CACHE_CLASSES and RAILS_ENV environment variables).

  • CACHE_CLASSES=true RAILS_ENV=production bin/delayed_job start
  • We found that on a reasonably sized server (a ~$100/month Azure VM) we could comfortably run 8 or so parallel workers without spiking CPU usage too much, although obviously this will depend on the jobs being performed; ours were basic API enrichment (fetching and storing data).

  • CACHE_CLASSES=true RAILS_ENV=production bin/delayed_job -n 8 start
  • Only process certain queues

  • CACHE_CLASSES=true RAILS_ENV=production bin/delayed_job --queues=primary,secondary -n 8 start 
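  • Jobs only land in a named queue if you enqueue them with one. A minimal sketch using Delayed Job's standard queue options (the model, method, and queue names here are hypothetical):

  • # Enqueue a single call onto a specific queue
    user.delay(queue: 'primary').enrich_from_api
    
    # Or give a method a default queue whenever it's called
    class User < ApplicationRecord
      def enrich_from_api
        # ...fetch and store data from the external API...
      end
      handle_asynchronously :enrich_from_api, queue: 'secondary'
    end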
  • Managing Logs

  • Sometimes the log file created by Rails grows until it takes up all the space on the server, causing everything to stop. To prevent this, add a cron job that clears the log every hour.

  • First, open up your cron file

  • crontab -e
  • Then, add the following line (adjusting the path to your app's log directory) and save the file

  • 0 * * * * find /path/to/app/log -name "production.log" -delete
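  • One caveat: on Linux, deleting a file the Rails process still holds open doesn't free the disk space until the process restarts. Truncating the log in place avoids that, so this alternative (same placeholder path) may be safer:

  • 0 * * * * truncate -s 0 /path/to/app/log/production.log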
  • Managing Performance

  • Netdata has a free, one-line-install performance agent that gives you a single dashboard and email alerts. It's our go-to for monitoring.
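  • The exact install command may have changed since writing, so check Netdata's docs; at the time it was a kickstart script along these lines:

  • bash <(curl -Ss https://my-netdata.io/kickstart.sh)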

  • Sometimes (this has only happened to me once so far) the CPU usage can shoot up, which is immediately obvious on the Netdata dashboard. In this scenario...

    • See which processes are causing the issue using the top command. By default this shows all processes ordered by CPU usage, along with each one's PID.

    • Kill each process that's eating CPU using kill -9 PID, replacing PID with the process ID from top. (Plain kill, which sends the gentler SIGTERM, is worth trying first.)
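  • A sketch of that flow (the PID here is hypothetical):

  • top            # sorted by %CPU by default; note the offending PID
    kill 12345     # SIGTERM first, for a clean shutdown
    kill -9 12345  # SIGKILL if it won't die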

  • Other Useful Scripts

  • Restart Nginx

  • sudo systemctl restart nginx
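  • If Nginx fails to come back up, it's usually a config problem; test the configuration before restarting again:

  • sudo nginx -t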
  • Managing Workers

  • Finding out what workers are running (bin/delayed_job status can be unreliable)

  • ps aux | grep delayed_job
  • Killing Workers (bin/delayed_job stop can be unreliable). Note that pkill -f matches against each process's full command line, so check with the ps command above that nothing else on the box matches "delayed_job" first.

  • pkill -f "delayed_job"
