
Concurrent calls to an API using Ruby

December 18, 2015

I ran into an interesting problem:

  1. We need to read a CSV file in chunks
  2. Process each chunk of data (transformation)
  3. Once the data is transformed, call the URLs concurrently

The goal is to avoid processing the records sequentially. For example, suppose you have a REST API through which you update employee addresses.

Each request differs in its address and other details (employee ID and so on).

The bottleneck will be the database, which can be handled by throttling the number of concurrent calls.
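That throttling idea can be sketched with plain Ruby threads and a queue, no HTTP involved; this is a minimal illustrative sketch (the pool size and the fake job list are made up, not from the original code):

```ruby
# Throttle concurrency with a fixed-size worker pool (stdlib only).
POOL_SIZE = 3            # illustrative cap on concurrent workers
jobs = Queue.new
(1..10).each { |id| jobs << id }
POOL_SIZE.times { jobs << :done }   # one stop marker per worker

results = Queue.new
workers = POOL_SIZE.times.map do
  Thread.new do
    while (job = jobs.pop) != :done
      # In the real pipeline this would be the API call for one record.
      results << "processed #{job}"
    end
  end
end
workers.each(&:join)
puts results.size # => 10
```

Only three jobs are ever in flight at once, which is the same effect you want against the database.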

Two gems came in handy to solve this problem:

  1. Typhoeus
  2. Smarter CSV

Code sample

Reading the CSV file in chunks

options = { :chunk_size => 30 }
n = SmarterCSV.process(file_name, options) do |chunk|
  process(chunk, otherArguments)
end
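If you want to see the chunking idea without the smarter_csv dependency, the same shape can be approximated with Ruby's built-in CSV library; a rough sketch with made-up sample data:

```ruby
require "csv"

# Small in-memory CSV standing in for the real file.
csv_data = "id,city\n1,Austin\n2,Boston\n3,Chicago\n4,Denver\n"

rows = CSV.parse(csv_data, headers: true).map(&:to_h)
chunks = rows.each_slice(3).to_a  # like SmarterCSV's :chunk_size => 3

chunks.each do |chunk|
  puts "chunk of #{chunk.size} row(s)"
end
```

Each `chunk` is an array of row hashes, which is the same shape SmarterCSV yields to the block.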

Creating the chunk requests

def process(chunk, otherArguments)
  urlArray = []
  chunk.each do |row|
    request_body_map = {
      :test => row[:test]
    }
    urlArray << row[:test].to_s + request_body_map.to_json
  end
  callConcurrent(urlArray, url)
end
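The per-row transformation above boils down to mapping CSV fields into a JSON request body; here is a minimal stdlib sketch of that step (the field names and paths are hypothetical, not from the original code):

```ruby
require "json"

# Hypothetical rows as SmarterCSV would yield them (symbol keys).
chunk = [
  { employee_id: 1, address: "12 Main St" },
  { employee_id: 2, address: "9 Elm Ave" }
]

requests = chunk.map do |row|
  # Each request carries the employee id in the path
  # and the new address as a JSON body.
  { path: "/employees/#{row[:employee_id]}/address",
    body: { address: row[:address] }.to_json }
end

puts requests.first[:body] # => {"address":"12 Main St"}
```

Keeping path and body together per row makes the later concurrent step a simple iteration over `requests`.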

Calling the requests concurrently

def callConcurrent(requestDetails, url)
  bodyString = ""
  # max_concurrency throttles the number of concurrent calls
  hydra = Typhoeus::Hydra.new(max_concurrency: 10)
  requestDetails.each do |website|
    request = Typhoeus::Request.new(
      website,
      method: :put,
      body: bodyString
    )
    request.on_complete do |response|
      p website + " === " + response.code.to_s
    end
    hydra.queue(request)
  end
  hydra.run
end


There is a significant improvement in processing the data. Identify the load your database can handle, tweak the chunk size and concurrency accordingly, and you will be able to achieve better performance.
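One way to see the effect of the pool size is to time a simulated workload; a toy sketch with the stdlib Benchmark module, where the sleep-based "call" is a stand-in for a real request:

```ruby
require "benchmark"

# Run `jobs` fake 10 ms calls through a pool of `pool_size` workers.
def simulate(jobs, pool_size)
  queue = Queue.new
  jobs.times { queue << true }
  pool_size.times { queue << nil }   # nil marker ends each worker
  threads = pool_size.times.map do
    Thread.new do
      while queue.pop
        sleep 0.01                   # stand-in for a ~10 ms API call
      end
    end
  end
  threads.each(&:join)
end

seq = Benchmark.realtime { simulate(20, 1) }
par = Benchmark.realtime { simulate(20, 10) }
puts format("sequential: %.2fs, pool of 10: %.2fs", seq, par)
```

The same measurement against the real API (and database) tells you where the throttle should sit.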

