CA Asset Portfolio Management r113 Performance testing results summary - CA Technologies

CA Asset Portfolio Management r11.3
Performance testing results summary

The 11.3 release of Unicenter Asset Portfolio Management (UAPM) was subjected to a series of stress and performance tests prior to its GA release. The following summary of the results will help an organization plan and implement UAPM to obtain optimal performance.

Test Case scenario:

The most relevant real-world testing scenario is one that simulates a typical user's daily activity. The following steps were used to evaluate the performance of the application for one user, and then repeated while additional users performed the same activities. In the example below, for the single web server configuration, it took 1 user 570 seconds, or about 9 1/2 minutes, to perform the 17 steps while 9 other users performed the same 17 steps simultaneously. The "thinktime" values simulate the delay a typical user would need to evaluate what is presented on the screen. For example, in step 4, the 62-second thinktime represents the time a user would take to scan the multiple pages of the approximately 400 results; it is not the actual elapsed time of the search.

This test was run using the Silk Performer testing tool.

1. Log into the application (28 sec. Thinktime)  
2. Run a predefined, saved, advanced search of Assets that returns all assets without selection criteria (approx. 400 records) (24 sec. Thinktime)  
3. Return to the Home screen (8 sec. Thinktime)  
4. Run a basic search of Assets that returns all assets (approx. 400 records) (62 sec. Thinktime)  
5. Return to the Home screen (31 sec. Thinktime)  
6. Create a new Asset from the asset screen by searching for a model (60 sec. Thinktime)  
7. Delete the new asset (5 sec. Thinktime)  
8. Return to the Home screen (4 sec. Thinktime)  
9. Create a new Asset from the asset screen by searching for a model (61 sec. Thinktime)  
10. Delete the new asset (5 sec. Thinktime)  
11. Return to the Home screen (4 sec. Thinktime)  
12. Create a new Legal Doc (99 sec. Thinktime)  
13. Delete the new legal doc (6 sec. Thinktime)  
14. Return to the Home screen (6 sec. Thinktime)  
15. Run a basic search of models that returns all models (36 sec. Thinktime)  
16. Return to the Home screen (6 sec. Thinktime)  
17. Log out of application (3 sec. Thinktime)  
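The 17-step workflow above can be sketched as a simple pacing loop. The step names and thinktime values come directly from the list; `run_step` is a hypothetical stand-in for the HTTP requests a load-testing tool such as Silk Performer would actually issue.

```python
import time

# (step description, thinktime in seconds), taken from the 17-step scenario
SCENARIO = [
    ("Log in", 28),
    ("Saved advanced asset search (~400 records)", 24),
    ("Return to Home", 8),
    ("Basic asset search (~400 records)", 62),
    ("Return to Home", 31),
    ("Create asset via model search", 60),
    ("Delete new asset", 5),
    ("Return to Home", 4),
    ("Create asset via model search", 61),
    ("Delete new asset", 5),
    ("Return to Home", 4),
    ("Create legal document", 99),
    ("Delete legal document", 6),
    ("Return to Home", 6),
    ("Basic model search", 36),
    ("Return to Home", 6),
    ("Log out", 3),
]

def run_scenario(run_step, sleep=time.sleep):
    """Execute each step, then pause for its thinktime."""
    for step, thinktime in SCENARIO:
        run_step(step)      # hypothetical: issue the step's HTTP request(s)
        sleep(thinktime)    # simulate the user reading the screen

# Thinktime alone accounts for 448 of the ~570-second single-user run,
# leaving roughly two minutes of actual server processing time.
print(sum(t for _, t in SCENARIO))  # → 448
```

Passing a no-op `sleep` makes the same scenario usable in a quick functional check without waiting out the thinktimes.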

Hardware and Configurations:

UAPM utilizes the Microsoft .NET Framework to perform its web functions. A "worker process" named w3wp.exe does the actual work of processing web requests. The resource usage of this process can be monitored using Task Manager or the Performance Monitor utility of the Windows OS.

There were three basic hardware configurations used in the testing:

  1. Single web server, 2 CPU, 1 w3wp
    In this configuration, there is one web server machine running one worker process. This is the most basic configuration.

  2. Web Garden, 1 Server, 2 CPU, 2 w3wp
    In this configuration, there is one web server, but two worker processes are allocated to service web requests. Typically, one worker process is configured for each CPU in the machine.

  3. Web Farm, 2 Servers, 2 CPU, 1 w3wp each
    This configuration can also be referred to as a load-balancing configuration. This particular web farm was implemented using the Microsoft Network Load Balancing Manager. In this configuration, there are two physical web servers, each with 2 CPUs and each running one worker process. Users specify one URL, and the load is dynamically distributed between the servers by the Load Balancing Manager.
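As an illustration of what the farm buys you, a toy dispatcher that spreads requests across the two servers might look like the sketch below. Note this is only an illustration of load spreading: Microsoft NLB actually distributes traffic by hashing on client address, not by strict rotation, and the server names are hypothetical.

```python
import itertools

class RoundRobinBalancer:
    """Toy dispatcher: cycles incoming requests across the farm's servers.
    (Microsoft NLB really hashes on the client address so a client tends to
    stick to one host; round-robin is used here only for simplicity.)"""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        # Return the server that should handle the next request.
        return next(self._cycle)

farm = RoundRobinBalancer(["web1", "web2"])
print([farm.pick() for _ in range(4)])  # → ['web1', 'web2', 'web1', 'web2']
```

With two servers each handling roughly half the requests, each worker process stays inside the comfortable region of the single-server results for twice as many total users.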

The hardware specifications used in this test were the following:

Web Server and Database Server:
Processor: Dual, 2 GHz Xeon
Memory: 4 GB RAM
Database Version: MS SQL Server 2005

The test was executed for each of the hardware configurations. Each test starts by adding 1 user per minute for 10 minutes, until 10 users are running, and then executes those 10 users for 50 minutes. After 50 minutes, it adds another 10 users at 1-minute intervals and then runs those 20 users for 50 minutes. It continues this process until it reaches 120 users. The results below, measured in seconds, report how long, on average, it took 1 user to complete all 17 steps (from above).
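The ramp-and-hold load profile described above (add 1 user per minute up to the next plateau of 10, hold for 50 minutes, repeat until 120 users) can be sketched as:

```python
def user_schedule(max_users=120, batch=10, hold_minutes=50):
    """Yield (minute, active_users) pairs for the ramp-and-hold profile."""
    minute, users = 0, 0
    while users < max_users:
        for _ in range(batch):          # ramp: add 1 user per minute
            users += 1
            minute += 1
            yield minute, users
        for _ in range(hold_minutes):   # hold the plateau for 50 minutes
            minute += 1
            yield minute, users

# 12 cycles of (10 min ramp + 50 min hold) = 720 minutes total
sched = dict(user_schedule())
print(max(sched))                       # → 720
print(sched[10], sched[60], sched[70])  # → 10 10 20
```

This makes the overall test length explicit: reaching the 120-user plateau and holding it takes 12 hours per hardware configuration.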

The following are the results:

Average completion time (seconds) by hardware configuration and number of users:

# of users                                    1     5    10    20    30    40    50
Single web server, 2 CPU, 1 w3wp            570   570   570   750   900  1150  1320
Web Garden, 1 Server, 2 CPU, 2 w3wp         571   571   571   705   970  1140  1852
Web Farm, 2 Servers, 2 CPU, 1 w3wp each     540   542   545   565   570   620   775

# of users                                   60    70    80    90   100   110   120
Single web server, 2 CPU, 1 w3wp           2150  2540  3200  3650  4800  5500  5700
Web Garden, 1 Server, 2 CPU, 2 w3wp        2383  3067  3872  5032  6200  6830  6980
Web Farm, 2 Servers, 2 CPU, 1 w3wp each    1062  1402  2541  3100  3700  4550  5150

Figure 1

As the results show, if an installation needs to support more than 20-30 users, a web farm is the recommended configuration. As the user population grows, additional machines can be added to the web farm to support the growth.

Based on these results, if response time is acceptable at 30 users with 1 server, then on average 2 servers will give acceptable response time with 55 users, and 3 servers with 80 users. Note that the database server must be large enough to handle the user load so that it does not become the bottleneck. Factors such as network bandwidth, network traffic, and the number of "hops" between the client browser machines, the web server, and the database machine also contribute to application performance.
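The sizing guidance above can be expressed as a rough linear rule of thumb. The figures 30/55/80 come from the results; the assumption that each additional server carries about 25 more users at similar response time is extrapolated from them and should be validated against your own environment.

```python
import math

def servers_needed(users, base_capacity=30, per_extra_server=25):
    """Rough sizing rule from the test results: 1 server handles ~30 users,
    and each additional farm server adds ~25 users at a similar response
    time. These constants are assumptions derived from the 30/55/80 data."""
    if users <= base_capacity:
        return 1
    return 1 + math.ceil((users - base_capacity) / per_extra_server)

print([servers_needed(u) for u in (30, 55, 80, 100)])  # → [1, 2, 3, 4]
```

Remember that this estimate only covers the web tier; the database server and network path must scale along with it or they become the bottleneck instead.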
