I have some questions about scalability with Node and Nginx. I'm fairly new to this whole environment setup. My architecture: the company has an app (Android/Play Store/site) and a dashboard for clients (web).

I have a droplet (Ubuntu 20.04) on DigitalOcean running the APIs for those apps. The configuration is:
4 Intel CPUs and 8 GB RAM (Basic Shared CPU)
There are 2 instances of the API for the dashboard and 2 instances of the API for the app.
It's running behind Nginx.
In Nginx I have set up load balancing, and it's working:
upstream apiv2.xxx.xxxx {
  server localhost:5102;
  server localhost:5103;
}

upstream apiv2-dash.xxx.xxxxxx {
  server localhost:5100;
  server localhost:5101;
}
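For what it's worth, a hedged sketch of how upstream blocks like these are often extended with `least_conn` balancing and upstream keepalive (the directives are standard Nginx; the upstream name and ports mirror the ones above, and the `server`/`location` wrapper is illustrative only):

```nginx
upstream apiv2_app {
    least_conn;              # send each request to the instance with the fewest active connections
    server localhost:5102;
    server localhost:5103;
    keepalive 64;            # pool of idle upstream connections that Nginx workers reuse
}

server {
    location / {
        proxy_pass http://apiv2_app;
        proxy_http_version 1.1;            # HTTP/1.1 is required for upstream keepalive
        proxy_set_header Connection "";    # clear the Connection header so keepalive is honored
    }
}
```

Reusing upstream connections avoids paying a TCP handshake per proxied request, which matters at hundreds of requests per second.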

I have also set, in the Nginx conf file:

worker_connections 65535;
worker_rlimit_nofile 65535;

I also raised the ulimit to 65535 following this guide: https://medium.com/@muhammadtriwibowo/set-permanently-ulimit-n-open-files-in-ubuntu-4d61064429a

Now, I have been trying to run some stress tests on the server for learning purposes; it may come in handy in the future. As of now, there are no problems in production.
The URL I'm testing returns a JSON object built from a static array held in memory (the array is a list refreshed from time to time by a routine that pulls it from the database), so for this scenario the database is irrelevant.

Here is how I'm testing it, by running this:
JavaScript
// Client-side stress test: fires 100 requests every second.
import moment from "moment"; // timestamps for the request logs below

teste();

export function teste() {

    let i = 0;
    setInterval(() => {
        for (let index = 0; index < 100; index++) {
            requestsApiProdutos(i++);
            // requests2();
        }
    }, 1000);

}


// Fires one GET and logs start/finish timestamps (or the error).
// "Cloud" is the app's own HTTP client wrapper.
function requestsApiProdutos(i) {
    const start = moment();
    Cloud.get("produtos/homev2", {}, (res, error) => {
        const fim = moment();
        const stamp = start.format("DD/MM/YYYY HH:mm:ss:SSS") + " / " + fim.format("DD/MM/YYYY HH:mm:ss:SSS");
        if (res) {
            console.log(i + " - " + stamp, "recebeu"); // "recebeu" = "received"
        } else {
            console.log(i + " - " + stamp, error);
        }
    });
}

That should be 100 requests per second, as far as I understand.

After almost a minute, I start getting connection timeouts on some requests.
Image: logs of the timeouts
http://prntscr.com/117ilw9
There is nothing in the Nginx logs.
If I increase the request rate, say to 150 or 200 per second, the timeouts appear earlier, and so on.
The CPU cores never go higher than 35% (I'm using htop for monitoring), and memory usage is also low.

How is CPU utilization related to this? I would understand the timeouts if at least two cores were close to 100%.

Are those timeouts expected with this configuration?
If not:
1 - What else should I do? Configuration changes, etc.
If yes:
1 - Will having more CPU cores, with more instances on the same machine, raise the requests per second?
2 - When should I use another droplet for better scalability?
3 - If I use another droplet and add it here:
upstream apiv2.xxx.xxxx {
  server localhost:5102;
  server localhost:5103;
  server <new-droplet-1-ip>:<port>;
  server <new-droplet-2-ip>:<port>;
  server <new-droplet-3-ip>:<port>;
  # ...etc
}

how do the worker_connections/worker_rlimit_nofile limits work here?
Let's say that with my two localhost instances I hit the limit of 65535 connections (unlikely, but just hypothetically).
Would adding new droplets to the upstream solve the problem, or what should be done if the limit is hit?

If possible, I would like answers for both the "yes" and "no" cases. Thank you for your time.

What I have tried:

I tried searching Google for a direct answer but could not find one.
This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


