I continued performance testing my REST services and had another interesting day :).
I was trying to run the Apache HTTP server benchmarking tool (ab) for a total of 10,000 requests with 1,000 concurrent requests. My services behaved like a real gentleman. But as soon as I bumped the concurrency from 1,000 to 1,500, I started getting a "socket: Too many open files (24)" error.
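Here is roughly what the run looked like (the URL is a placeholder, not my actual endpoint):

```shell
# 10,000 requests total, 1,500 at a time; this is the shape of the run
# that triggered the "Too many open files" error. The URL is hypothetical.
ab -n 10000 -c 1500 http://localhost:8080/api/items
```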
After a little bit of googling I realized that I was hitting the Linux system's open-file limit. I ran the "ulimit -a" command to see the resource limits available to the shell and to the processes started by it, and noticed that the limit for open files (ulimit -n) was set to 1024. I raised the limit by running "ulimit -n 5000" and got past this error.
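The steps above, as a shell session sketch:

```shell
# Show all per-process resource limits; look for the "open files" row
ulimit -a

# Show just the open-file limit (1024 by default on many distros)
ulimit -n

# Raise the limit for this shell session
# (this may fail if 5000 exceeds the hard limit, shown by "ulimit -Hn")
ulimit -n 5000
ulimit -n
```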
The ulimit change holds good only for the running shell; to make it permanent, you might have to add a new line under /etc/profile.
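For example, appending the same command to /etc/profile (requires root; a common alternative on many distros is /etc/security/limits.conf):

```shell
# Run as root: every login shell will now start with the higher limit
echo "ulimit -n 5000" >> /etc/profile
```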
Time for the samba dance again. Soon I wanted to test my services with a total of 100,000 requests and, guess what, I ran into another error :( : "apr_socket_recv: Connection timed out (110)". I realized that another OS property had to be modified to accommodate that many requests: somaxconn.
To view the existing setting, look into the /proc/sys/net/core/somaxconn file. To change the max connection parameter from the default 128 to 10240, add the line "net.core.somaxconn = 10240" to the /etc/sysctl.conf file, then run the "/sbin/sysctl -p /etc/sysctl.conf" command. The /proc/sys/net/core/somaxconn file should now have our new number.
Running "echo 10240 > /proc/sys/net/core/somaxconn" will change the setting temporarily (it is lost on reboot).
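Putting the permanent and temporary changes together (all of these need root):

```shell
# Inspect the current backlog limit (default is usually 128)
cat /proc/sys/net/core/somaxconn

# Permanent: persist the setting in sysctl.conf, then reload it
echo "net.core.somaxconn = 10240" >> /etc/sysctl.conf
/sbin/sysctl -p /etc/sysctl.conf

# Temporary: takes effect immediately, lost on reboot
echo 10240 > /proc/sys/net/core/somaxconn
```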
Now I am able to fire 5,000 concurrent requests for a total of 5,000,000 requests. Happy again.
Hope this helps somebody.