[OTDev] OpenAM performance

Andreas Maunz andreas at maunz.de
Tue Jul 5 21:15:55 CEST 2011


Hi Nina,

On Tue, 5 Jul 2011 20:02:42 +0300, Nina Jeliazkova
<jeliazkova.nina at gmail.com> wrote:
>> Obviously, the easiest way is b). Concerning Vedrin's experiments, it
>> would be rather straightforward what to do (correct me if I am wrong),
>> namely to give the main proportion of RAM to a dedicated LDAP hosting
>> policy configuration in a separate JAVA VM.
>
> Well, I was considering a)  as the most realistic route :)
> Bringing in a second AA instance means either synchronizing the tokens (tokens
> coming from one instance will not be considered valid by another) or
> providing info on which service uses which AA instance.  Either way, this will
> complicate the service interaction.

Yes, I was thinking more about upgrading the common server than about
setting up our own IDEA instance; that was a bit unclear in my mail.

Of course, policy cleanup is a good thing. Unfortunately, there are
many open questions: people tend to create resources (and policies) and
never delete them (although I am sure most are never used again). I am
thinking about regular "garbage collection", i.e. deleting policies
that refer to services returning 401. Still, many "dead resources" may
be left behind if services
a) return something other than 401 although the resource is actually
gone (e.g. 403 or 500), or
b) return something other than 401 with the resource in place but
unused (this applies mostly to partners that do not run an RDBMS to
hold the data).
Thus, it might be more sensible to start accounting and keep a log of
accesses to each resource. This would mean wrapping the OpenAM auth
calls, which adds overhead. But then again, can policies be deleted
just because their resources are unused? And after how long?

>> Currently, the production service has only 2G of memory (and it does
>> not use a dedicated LDAP).
>> Jiffybox offers "CloudLevel 5" (16 GB RAM / 8 CPUs) for 0,25 EUR/h,
>> which is their most powerful appliance.
>> When switching the machine (be the new one physical or not) we should
>> consider starting from scratch, as no upgrade from OpenSSO/OpenAM 9.0 to
>> the current version seems possible.
>
> Perhaps we could have a script, reading the policies from MySQL pol table ,
> and feeding the new OpenAM instance?

Sure, the MySQL part is easy. What sadly seems impossible is
transferring the OpenAM policy configuration (see
https://bugster.forgerock.org/jira/browse/OPENAM-464). Without it, the
MySQL information is of little use.
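For the easy MySQL half, a minimal DB-API sketch (shown with sqlite3 from the standard library so it runs standalone; against the real database one would swap in a MySQL driver with the same DB-API interface; the column names are assumptions, only the table name "pol" comes from the thread):

```python
import sqlite3  # stand-in for a MySQL DB-API driver

def read_policies(conn):
    """Yield (user, resource) pairs from the 'pol' table.
    Column names are assumed; adjust to the actual schema."""
    cur = conn.execute("SELECT user, resource FROM pol")
    yield from cur
```

Feeding these rows into a fresh OpenAM instance is the part the linked issue makes hard, so the sketch stops at the read side.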

> P.S. IMHO the memory requirements of OpenDS are rather ridiculous.  After
> all, these 40K policies fit in about 50MB of memory , if one consider them
> as strings, and my calculations are not completely wrong.  Perhaps we are
> still unaware of some configuration magic in OpenAM/ OpenDJ (LDAP)backend.
>  The OpenDJ uses Berkeley DB as a backend, and a (distributed) version of
> BDB  is reported to handle all Google accounts and associated properties ...

I agree: it consumes too much memory, at least when we consider scaling
behaviour. But the slow LDAP write access (in all LDAP implementations
I have encountered) is also a motivation to look around for alternative
backends.
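For what it's worth, your back-of-envelope figure checks out; a quick sanity check (the average policy size is an assumption):

```python
# Rough check of the "40K policies fit in ~50MB as strings" estimate.
policies = 40_000
avg_policy_bytes = 1_250  # assumed average size of one policy as a string
total_mb = policies * avg_policy_bytes / 1024 ** 2
print(round(total_mb, 1))  # about 47.7 MB, i.e. on the order of 50 MB
```

So even a generous per-policy size leaves OpenDJ's memory appetite two orders of magnitude above the raw data.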

Best regards
Andreas
