WSGIDaemonProcess: how many processes?

In a configuration where only one process is started and no additional processes are ever created, that one process still makes use of multiple threads. Because multiple threads are being used, there is no problem with overlapping requests generated by an AJAX-based web page.
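For example (the server name and paths here are only illustrative), one way to get exactly this behaviour in daemon mode is a WSGIDaemonProcess directive that omits the processes option, so a single multithreaded daemon process is created:

    WSGIDaemonProcess example.com threads=25
    WSGIProcessGroup example.com
    WSGIScriptAlias / /srv/example.com/app.wsgi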

With the winnt MPM (the MPM used on Windows), multiple worker threads within a single child process handle all requests. At no time are additional child processes created, nor is that one child process shut down and killed off, except when Apache as a whole is being stopped or restarted. Because there is only one child process, the maximum number of threads used is much greater.
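For reference, the relevant MPM knob in that case is ThreadsPerChild, which sets how many threads the single child process runs; the value below is purely illustrative:

    # mpm_winnt: a single child process serves all requests
    ThreadsPerChild 150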

In daemon mode there is a related distinction: omitting the processes option gives a single process with wsgi.multiprocess set to False, whereas specifying processes=1 explicitly still gives a single process but leaves wsgi.multiprocess set to True. That distinction exists to allow for setups where some form of mapping mechanism is used to distribute requests across multiple process groups, so that in effect the application is still a multiprocess application. Daemon processes are never used to serve static files or to host applications implemented in other languages, and when the server experiences additional load, no more daemon processes are created than the number defined.
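As a sketch of those alternatives (they would not appear together in one configuration; the names and counts are illustrative):

    # No processes option: one daemon process, wsgi.multiprocess is False.
    WSGIDaemonProcess example.com threads=15

    # processes=1: still one daemon process, but wsgi.multiprocess stays True,
    # for setups that spread requests across several process groups.
    WSGIDaemonProcess example.com processes=1 threads=15

    # A fixed pool of three processes; no more are ever created under load.
    WSGIDaemonProcess example.com processes=3 threads=15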

You should therefore always plan ahead and make sure the number of processes and threads defined is adequate to cope with the expected load.

Care needs to be taken with global data because request handlers can execute within the context of distinct child processes, each with its own set of global data unique to that child process. The consequence of this is that you cannot assume separate invocations of a request handler will have access to the same global data if that data resides only within the memory of one child process.

If some set of global data must be accessible by all invocations of a handler, that data will need to be stored in a way that it can be accessed from multiple child processes. Such sharing could be achieved by storing the global data within an external database, the filesystem or in shared memory accessible by all child processes.

Since the global data will be accessible from multiple child processes at the same time, there must be adequate locking mechanisms in place to prevent distinct child processes from trying to modify the same data at the same time.

Thus, where Apache is being used to host multiple WSGI applications, a process will contain multiple sub interpreters. When Apache is run in a mode whereby there are multiple child processes, each child process will contain a sub interpreter for each WSGI application. When a sub interpreter is created for a WSGI application, it normally persists for the life of the process. The only exception is where interpreter reloading is enabled, in which case the sub interpreter is destroyed and recreated when the WSGI application script file changes.

The sub interpreters created for the WSGI applications each load their own copies of any Python modules. In other words, a change to global data held in a module imported by one sub interpreter will not be seen from the sub interpreter corresponding to a different WSGI application. This is the case whether or not the sub interpreters are in the same process.

Specifically, the WSGIApplicationGroup directive indicates that the marked WSGI applications should run within the context of a common sub interpreter rather than each in its own sub interpreter. By doing this, each of those WSGI applications has access to the same global data.
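A sketch of that arrangement (the paths and the group name are made up): two applications mounted at different URLs are placed in the same application group, and therefore the same sub interpreter:

    WSGIScriptAlias /app1 /srv/www/app1.wsgi
    WSGIScriptAlias /app2 /srv/www/app2.wsgi

    <Location /app1>
        WSGIApplicationGroup shared-group
    </Location>
    <Location /app2>
        WSGIApplicationGroup shared-group
    </Location>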

The only other way of sharing data between sub interpreters within the one child process would be to use an external data store, or a third party C extension module for Python which allows communication or sharing of data between multiple interpreters within the same process.

Where shared data needs to be visible to all application instances, regardless of which child process they execute in, and where changes made to the data by one application must be immediately available to another, including one executing in a different child process, an external data store such as a database or shared memory must be used.

I have a small blog (a couple of hundred hits per day). Is there a guideline for how many processes and threads to specify in WSGIDaemonProcess?

But I'll be setting up a larger site with far more hits per day. If there is no guideline, is there a way to tell that you need to adjust these?

Thanks, Rob.

Again, fudge factors are useful. The actual upper bound on how many processes you can run on the machine is dictated by how much memory each process takes. Spool up one process, then run a variety of memory-hungry actions against it (ones that retrieve and process a lot of data) with a realistic data set, and see what the memory usage balloons out to. If you only use a toy data set for testing, say 50 or so rows, the measurement won't be representative of what happens once that table has grown to many thousands of rows and one of your actions retrieves and manipulates every row in it.

You can artificially constrain your per-process memory usage with a script that reaps workers that reach a certain memory usage threshold, at the risk of causing nasty problems if you set that threshold too low.
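As an alternative to an external reaper script, mod_wsgi's daemon mode can recycle its own processes; a minimal sketch, assuming a reasonably recent mod_wsgi (the request count and other values are arbitrary):

    # Restart each daemon process after it has handled 1000 requests,
    # which caps how far a slow memory leak can grow.
    WSGIDaemonProcess example.com processes=4 threads=15 maximum-requests=1000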

Once you've got your memory use figure, deduct some amount of memory for system overhead, deduct a pile more if you've got other processes running on the same machine (like a database), and then some more again so you don't run out of room for the disk cache (how much depends on your disk working set size).

That's the amount of memory you divide by your per-process memory usage to get the ceiling. If the number of processes you need to service your peak load is greater than the number of processes you can fit on the box, you need more machines (or, in the simplest case, to move the database to another machine).
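A worked version of that arithmetic, using made-up numbers purely for illustration:

    # Hypothetical box with 8 GB of RAM:
    #   - 0.5 GB reserved for system overhead
    #   - 1.0 GB kept free for the disk cache
    #   = 6.5 GB left for application processes
    # Measured worst-case usage per process: ~0.5 GB
    # Ceiling: 6.5 GB / 0.5 GB = 13 processes
    # (the threads value below is likewise arbitrary)
    WSGIDaemonProcess example.com processes=13 threads=15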

There you are, several years of experience scaling websites distilled into one small and simple SF post.

I'd like to give some empirical numbers, and a "simple content" versus "e-commerce" application comparison. We run several customer websites, most of them mainly content sites or micro sites hosting django CMS, some custom forms, and sometimes Celery for scheduled background tasks.

Here's the configuration we use for each site of this kind (sketched below). I'm talking about roughly 40 sites on a single server, most of them with their staging site running in standby. With 2 processes running 15 threads each, the sites are well off by default, albeit limited in how much of the server's resources they can claim.
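The snippet itself did not survive in this copy; based on the 2 processes / 15 threads figures above it presumably looked something like the following, where the group name and the display-name option are assumptions:

    WSGIDaemonProcess sitename processes=2 threads=15 display-name=%{GROUP}
    WSGIProcessGroup sitename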

Why this setup is sufficient comes down to the simple nature of the CMS application: no request is ever expected to take more than a couple of milliseconds to complete, so Apache will always stay relaxed, and so will the CPU load.

More complex sites we build are characterized by local operations that are still computationally inexpensive, but that also have external dependencies (e.g. calls to remote web services). Operations that make external requests occupy threads for much longer, so you need more threads to serve the same number of users compared to a simple CMS site like the one above.

Even worse, threads are occasionally blocked when an external service can't answer a request immediately, sometimes for a couple of seconds. For those scenarios we tried 6 processes without seeing much difference, and we ended up with 12, which brought an incomparable boost in performance and operational stability (sketched below). Simple load tests with increasing numbers of parallel users are handled easily, with the site staying well responsive, while with 2 processes the site was unusable when serving 50 users in parallel.
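A sketch of the kind of daemon process directive described for this larger site; only the 12 processes figure comes from the text, while the thread count, group name and other options are assumptions:

    WSGIDaemonProcess bigsite processes=12 threads=15 display-name=%{GROUP}
    WSGIProcessGroup bigsite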

Note that we use a dedicated machine for just this single site here, so it won't steal resources that other sites may need. Using a higher number of processes is a trade-off: you let Apache make fuller use of the available system resources, but you give up headroom if you want to keep the server system (not just the website!) stable.

How high you can go can be calculated roughly as outlined in the accepted answer above, and is ultimately constrained by the available CPU power and RAM. Also be sure to understand and monitor your Apache server's open connections.
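One common way to watch those connections is Apache's bundled status page from mod_status; a minimal sketch using Apache 2.4 syntax, with access limited to the local machine:

    ExtendedStatus On
    <Location "/server-status">
        SetHandler server-status
        Require local
    </Location>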



