[OTDev] OpenAM performance

Vedrin Jeliazkov vedrin.jeliazkov at gmail.com
Mon Jul 11 12:11:12 CEST 2011


Hi Nina, Andreas, All,

I've switched the test setup again -- OpenDJ now runs in a separate
JVM instance and OpenAM is configured to use it for both the user and
the config data store. I've retrieved the policies owned by nina (1482
policies) and guest (11046 policies) that are defined in the
production AA server and imported them into the test instance. I've
run stress tests on this setup, covering the following workflows (a
rough sketch of the test loop is included after the list):

-- continuous auth+policy creation+authz+policy removal;
-- continuous auth+policy creation+authz;
-- continuous auth+authz;
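
For reference, the first workflow is roughly equivalent to the sketch
below. The AaClient interface and its methods are only placeholders
for the HTTP calls our test driver issues against the AA service --
they are not part of any OpenAM API, and the credentials and resource
URI are made up for illustration:

// Rough sketch of stress-test workflow 1a. AaClient is a placeholder
// for the actual test driver, not an OpenAM API.
interface AaClient {
    String authenticate(String user, String password);   // returns a token
    String createPolicy(String token, String policyXml); // returns a policy id
    boolean authorize(String token, String resourceUri); // authorisation check
    void deletePolicy(String token, String policyId);
}

class StressWorkflow1a implements Runnable {
    private final AaClient client;

    StressWorkflow1a(AaClient client) {
        this.client = client;
    }

    public void run() {
        // Repeat auth + policy creation + authz + policy removal until stopped.
        while (!Thread.currentThread().isInterrupted()) {
            long start = System.nanoTime();
            String token = client.authenticate("guest", "guest");
            String policyId = client.createPolicy(token, "<policy>...</policy>");
            client.authorize(token, "http://example.org/some/resource");
            client.deletePolicy(token, policyId);
            long elapsedMs = (System.nanoTime() - start) / 1000000L;
            System.out.println("iteration took " + elapsedMs + " ms");
        }
    }
}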

Both JVMs are configured with a large heap size (6 GB) in order to
check how much their memory footprint would grow and to minimise the
impact of garbage collection on the statistics. After 12 hours of
stress testing, the JVM running Tomcat (ambit2+openam+pol) has
allocated 2.2 GB, while the JVM running OpenDJ has allocated 1.7 GB.
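
(In case anyone wants to reproduce the footprint numbers: they can be
sampled with nothing beyond the standard java.lang.Runtime API, e.g.
with a small sampler along these lines -- the 60-second interval is an
arbitrary choice:)

// Minimal heap-usage sampler; prints allocated/used/max heap every minute.
public class HeapSampler {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            long allocatedMb = rt.totalMemory() / (1024 * 1024); // heap claimed from the OS
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            long maxMb = rt.maxMemory() / (1024 * 1024);         // the -Xmx limit (6 GB here)
            System.out.println("allocated=" + allocatedMb + " MB, used=" + usedMb
                    + " MB, max=" + maxMb + " MB");
            Thread.sleep(60 * 1000);
        }
    }
}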

The main issue we're currently facing is the steady increase in
policy creation/deletion latency:

http://ambit.uni-plovdiv.bg/cgi-bin/smokeping.cgi?target=IDEA-DEV.AA.TestWorkflow-1a

This seems to be triggered by running bulk policy deletions, but has
an impact on policy creation latency as well:

http://ambit.uni-plovdiv.bg/cgi-bin/smokeping.cgi?target=IDEA-DEV.AA.TestWorkflow-2

Authentication and authorisation are not affected by this problem
(their latency stays more or less constant at 20 ms):

http://ambit.uni-plovdiv.bg/cgi-bin/smokeping.cgi?target=IDEA-DEV.AA.TestWorkflow-3a

The observed slowdown might have something to do with the
SynchronizedMap that OpenAM uses to hold policy modifications. In a
thread dump I noticed that one thread performing a policy deletion
had acquired the lock on this SynchronizedMap, while a number of
other threads were waiting to acquire the lock on the same data
structure. This certainly reduces the overall throughput of policy
modifications, but it is not clear why the throughput is not constant
and why the latency grows under a constant workload.
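
To illustrate the contention pattern (this is a generic example of
how a map wrapped with Collections.synchronizedMap serialises all
access, not OpenAM's actual code -- the field names and the bulk
deletion are made up):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Every operation on a Collections.synchronizedMap takes the same monitor,
// so a long-running bulk deletion blocks all concurrent policy creations.
public class SynchronizedMapContention {
    private static final Map<String, Object> policyChanges =
            Collections.synchronizedMap(new HashMap<String, Object>());

    public static void main(String[] args) {
        Runnable bulkDelete = new Runnable() {
            public void run() {
                // Iterating a synchronized map requires holding its monitor
                // explicitly, which keeps other threads out for the whole sweep.
                synchronized (policyChanges) {
                    for (String id : policyChanges.keySet()) {
                        // simulate slow per-policy cleanup
                    }
                    policyChanges.clear();
                }
            }
        };
        Runnable create = new Runnable() {
            public void run() {
                // Blocks until the bulk deletion above releases the monitor.
                policyChanges.put("policy-" + System.nanoTime(), new Object());
            }
        };
        new Thread(bulkDelete).start();
        for (int i = 0; i < 10; i++) {
            new Thread(create).start();
        }
    }
}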

Kind regards,
Vedrin


