Interpreting server statistics
When you add a service to the server, you set some initial values for its configuration. As clients begin accessing the service, you can monitor its performance by examining its statistics. You can review statistics for your entire GIS server as well as each individual service. You can examine how many requests are processed per unit of time, what the average wait time is for a client, and how many requests timed out and didn't get a response from the server.
How to display server statistics
Do the following in ArcCatalog to access statistics for the GIS server as a whole and for individual services:
Displaying statistics for the GIS server
- In the Catalog tree, expand the GIS Servers node.
- Right-click the name of your GIS server and click Server Properties.
- Click the Statistics tab.
- Click Show Statistics.
Displaying statistics for a particular service configuration
- In the Catalog tree, right-click the GIS server that contains the service you want statistics for, then click Server Properties.
- Click the Statistics tab.
- Click the Services drop-down arrow and click the particular service you want statistics for.
- Click Show Statistics.
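The figures reported on the Statistics tab can be thought of as summaries over recent requests. The sketch below computes the same kinds of numbers from a hypothetical request log; none of the names here come from the ArcGIS API, and the sample values are invented for illustration.

```python
# Hypothetical request log: (wait_seconds, usage_seconds, timed_out) per request.
# These records and field names are illustrative only, not ArcGIS structures.
requests = [
    (0.5, 2.0, False),
    (1.2, 3.5, False),
    (4.0, 0.0, True),   # this request timed out waiting and was never served
    (0.8, 2.2, False),
]

completed = [r for r in requests if not r[2]]

avg_wait = sum(r[0] for r in requests) / len(requests)
avg_usage = sum(r[1] for r in completed) / len(completed)
timeouts = sum(1 for r in requests if r[2])

print(f"requests processed: {len(requests)}")
print(f"average wait time:  {avg_wait:.2f} s")
print(f"average usage time: {avg_usage:.2f} s")
print(f"time-outs:          {timeouts}")
```

Usage time is averaged only over requests that were actually served, while wait time and time-out counts cover every request, which matches how the two statistics are described in this topic.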
Using statistics to make decisions
Statistics can help you proactively monitor your server and its service configurations. A careful analysis of server statistics may help you catch a potential problem before it affects a large number of your server's clients. The following examples suggest actions you can take to remedy troubling statistics.
High usage time or too many usage time-outs
Usage time-outs occur when a client holds on to a service beyond the maximum allowable usage time. This maximum usage time is a property of the service, so you can change it if necessary. The default value is 600 seconds.
If a service is experiencing too many usage time-outs, it may mean that the service consistently has trouble completing a certain task. If this is the case, check that the service and its associated data are configured correctly. If the service is working fine, you might want to increase the maximum allowable usage time for the service.
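The decision above can be sketched as a simple rate check. The 600-second default comes from this topic; the 5% alert threshold and the counts are assumptions for the sketch, not ArcGIS settings.

```python
# Illustrative only: decide whether usage time-outs for a service warrant action.
max_usage_time = 600      # seconds; the service's default maximum usage time
total_requests = 2000     # assumed sample of recent requests
usage_timeouts = 160      # assumed number that hit the usage time-out

timeout_rate = usage_timeouts / total_requests
if timeout_rate > 0.05:   # 5% threshold is an assumed rule of thumb
    action = "check the service and its data; if healthy, raise the maximum usage time"
else:
    action = "no change needed"

print(f"usage time-out rate: {timeout_rate:.1%} -> {action}")
```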
To keep usage time down, ensure that your applications are designed to make efficient use of service pooling models and service instances. Developers should ensure that their code releases unused server contexts as soon as possible so as to make them available to other clients.
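The release-as-soon-as-possible advice amounts to a try/finally pattern around the server context. The `ServerConnection` class below is a hypothetical stand-in, not the ArcGIS Server API; the point is the shape of the code, which guarantees the context goes back to the pool even if the work raises an error.

```python
class ServerConnection:
    """Hypothetical connection that hands out pooled server contexts."""

    def __init__(self, pool_size):
        self.available = pool_size  # instances currently free in the pool

    def create_context(self):
        self.available -= 1         # an instance is checked out
        return object()

    def release_context(self, ctx):
        self.available += 1         # the instance is free for other clients


conn = ServerConnection(pool_size=2)
ctx = conn.create_context()
try:
    pass  # do the minimum work needed with the context, then fall through
finally:
    conn.release_context(ctx)  # released immediately, even on error

print(f"instances available: {conn.available}")
```

Holding the context across unrelated work (for example, while waiting on user input) keeps an instance checked out and drives up usage time for everyone else.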
You should avoid using a nonpooled service when a pooled service will suffice. Nonpooled services should only be used in stateful applications, such as those that are used for editing versioned data.
You can also cut usage time for map and globe services by creating caches and following best practices when authoring your map. When you use caches, the service may not even need to be accessed after the initial request if the client can get the cache tiles directly from the Web server. If you are not using a cache, one important tip is to use simple, scale-dependent renderers for features and labels. This cuts drawing time, thereby lowering service usage time.
High wait time or too many wait time-outs
Wait time is a combination of the time a client spends waiting in a queue and the time the server takes to create a service. Wait time is one of the more interesting statistics because it measures how responsive the server feels to a client application.
Wait time is related to usage time: when usage time is high, clients may have to wait longer for an available instance. If a client waits beyond the maximum allowable wait time, a wait time-out occurs.
If the average wait time for a service approaches the service's maximum allowable wait time, you're in danger of experiencing excessive time-outs. If you feel the wait time is reasonable, you can avoid the time-outs by increasing the maximum allowable wait time. If you want to lower the wait time, consider creating more instances of the service.
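The comparison described above can be expressed as a ratio check. The 80% warning threshold and the sample values are assumptions made for this sketch, not ArcGIS settings.

```python
# Illustrative check of observed wait time against the maximum allowable wait time.
max_wait_time = 60.0   # seconds; the service's maximum allowable wait time (assumed)
avg_wait_time = 52.0   # seconds; observed on the Statistics tab (assumed)

ratio = avg_wait_time / max_wait_time
if ratio > 0.8:        # 80% threshold is an assumed rule of thumb
    advice = "raise the maximum wait time, or add service instances"
else:
    advice = "wait time is within a comfortable margin"

print(f"wait-time ratio: {ratio:.0%} -> {advice}")
```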
At some point, increasing the number of instances won't improve performance, because you've reached the capacity of your server machines. To alleviate this issue, you can either reduce the number of instances allocated to other services or add new server object container (SOC) machines to your system.