nginx - HTTP connection pool to share among processes


Where I work, our main web application is served with nginx + uWSGI + Django on a single production box, with 80 uWSGI worker processes running. Our Django app regularly makes requests to Amazon S3, but if each of those 80 workers uses its own HTTP connection for those requests, then each worker's (relatively few) requests are not frequent enough to keep a keep-alive connection to Amazon's servers warm, so we often pay the connection-setup penalty when talking to Amazon.

What I would like is a proxy service running on the same box that can "funnel" the S3 connections, so that all 80 processes share a small pool of HTTP connections that are used often enough to be kept alive. The Django app would connect to the proxy, and the proxy would forward the requests to S3 using its own pool of live connections. I see that it is possible to use nginx as a proxy, but it is not clear to me whether it can pool upstream connections in the way I have in mind. An ideal solution would also auto-scale, so that a uWSGI worker never has to wait for a connection from the proxy, but the pool would shrink as load drops, since keeping many connections "hot" during quiet periods probably isn't feasible (except maybe 1 or 2 to handle upticks).
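For what it's worth, nginx's upstream keepalive support looks like it could be configured along these lines. This is only a sketch: the listen port, pool size, and the use of the generic S3 endpoint are illustrative assumptions, and I haven't verified that it achieves the cross-process sharing described above.

```nginx
# A local reverse proxy that funnels S3 traffic through a small
# pool of keep-alive upstream connections (illustrative values).
upstream s3_backend {
    server s3.amazonaws.com:443;
    keepalive 4;                 # keep up to 4 idle upstream connections open
}

server {
    listen 127.0.0.1:8080;       # Django workers point their S3 client here

    location / {
        proxy_pass https://s3_backend;
        proxy_http_version 1.1;           # HTTP/1.1 is required for upstream keep-alive
        proxy_set_header Connection "";   # clear the "Connection: close" default
        proxy_set_header Host s3.amazonaws.com;
    }
}
```

The key pieces are the `keepalive` directive in the `upstream` block plus the `proxy_http_version`/`Connection` settings, without which nginx closes each upstream connection after one request.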

I have looked at other forward proxies such as Squid, but those products seem geared toward the more traditional caching-proxy role, e.g. an ISP serving many unrelated remote clients.

Does anyone know of an existing solution for this kind of problem? Thanks a lot!
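To make the idea concrete, the kind of small shared pool the proxy would maintain internally can be sketched like this. This is a toy illustration, not production code: the `factory` callable, the pool size, and the blocking behavior are all my assumptions about how such a proxy might work.

```python
import queue
import threading

class ConnectionPool:
    """Minimal bounded pool: hand out idle connections, open new ones
    on demand up to max_size, and block when the pool is exhausted."""

    def __init__(self, factory, max_size=4):
        self._factory = factory          # callable that opens a new connection
        self._idle = queue.LifoQueue()   # LIFO keeps recently used connections "hot"
        self._max_size = max_size
        self._created = 0
        self._lock = threading.Lock()

    def acquire(self, timeout=None):
        # Prefer an idle (already-warm) connection if one is available.
        try:
            return self._idle.get_nowait()
        except queue.Empty:
            pass
        # Otherwise open a new one, up to the cap.
        with self._lock:
            if self._created < self._max_size:
                self._created += 1
                return self._factory()
        # Pool is at capacity: wait for a connection to be released.
        return self._idle.get(timeout=timeout)

    def release(self, conn):
        self._idle.put(conn)
```

A pool like this, fed with a factory such as `lambda: http.client.HTTPSConnection("s3.amazonaws.com")` (hypothetical), is roughly what I'd hope the proxy does on my behalf; the part I don't want to build myself is the cross-process sharing and the scale-down of idle connections.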

