[OTDev] OpenAM performance

Nina Jeliazkova jeliazkova.nina at gmail.com
Tue Jul 5 16:00:06 CEST 2011


Hi All,

On 5 July 2011 16:09, Vedrin Jeliazkov <vedrin.jeliazkov at gmail.com> wrote:

> Hi again,
>
> On 4 July 2011 11:45, Vedrin Jeliazkov <vedrin.jeliazkov at gmail.com> wrote:
>
> > So basically, for holding circa 20K policies you need > 2 GB RAM just
> > for the OpenDJ server.
>
> As the number of policies in our test setup grows, this rough
> estimation is confirmed:
>
> 10000 policies require about 1 GB of RAM allocated to the OpenDJ service
>
> We have now more than 41000 policies registered and OpenDJ's JVM
> instance has consumed all the allocated heap space (4GB). In order to
> proceed further with our tests, I've increased the allocated RAM for
> this purpose to 6GB.
>
> Some interesting questions pop in my mind:
>
> 1) do we have a rough estimation of the number of policies we would
> like to be able to support (at least at this stage)?
>

Currently at http://apps.ideaconsult.net:8080/ambit2/ there are roughly
37000 datasets and ~470 models. This includes all intermediate datasets
and models, created (and not yet deleted) by all partners and third-party
tools that have been using the service since last year.

This is already three times the number of existing policies in the
OpenTox production AA, and the performance of the production AA server
is a showstopper for hooking our production services into the AA
infrastructure, unless:

a) we consider some of the resources obsolete and perform a clean-up;
b) we bring up our own instance of AA, which can handle this amount of
resources;
c) we find or develop a more efficient approach to policy management.
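
Putting the figures above together gives a rough idea of the scale
involved. The following is only a back-of-the-envelope sketch in Python,
assuming (my assumption, not a measurement) that the ~1 GB of OpenDJ heap
per 10000 policies scales linearly and that each dataset/model would need
one policy:

# Back-of-the-envelope extrapolation from the figures quoted above.
# Assumptions: ~1 GB of OpenDJ heap per 10000 policies, scaling
# linearly; one policy per dataset/model.
GB_PER_POLICY = 1.0 / 10000        # roughly 0.1 MB of heap per policy

datasets, models = 37000, 470      # current resources on apps.ideaconsult.net
policies_needed = datasets + models

heap_gb = policies_needed * GB_PER_POLICY
print("%d policies -> about %.1f GB of OpenDJ heap" % (policies_needed, heap_gb))
# 37470 policies -> about 3.7 GB of OpenDJ heap, before growth or overhead

In other words, protecting just the current AMBIT resources would already
need roughly the heap that the 41000 test policies mentioned above
consume now.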


>
> 2) what is the upper limit of the allocated RAM in a virtual appliance
> in the cloud?
>

I guess it depends on the contracts. Is it possible to increase the
allocated resources for the current OpenTox production instances, which
are cloud hosted, and if so, by how much?

Regarding the cloud/anti-cloud discussion, leaving aside "trust" and
trendy buzzwords, the real question is whether an available cloud
solution pays off, i.e. whether a cloud solution with a guaranteed
amount of resources (network access, memory, CPU, admin support, etc.)
costs less than the same service running on bare hardware. I guess the
answer (and the cost distribution) might differ in different parts of
the world.

Finally, an example. The Amazon free tier <http://aws.amazon.com/free/>,
announced last week, offers "750 hours of Amazon EC2
<http://aws.amazon.com/ec2> Linux Micro Instance usage (613 MB of memory
and 32-bit and 64-bit platform support)". This amount of memory is
pretty modest for today's computers and software. What would the cost be
if the requirements were a bit higher? How much memory do the desktops
or laptops everybody is working on have? Less than or equal to 613 MB?

Why is it considered acceptable that a service, which is supposed to be
accessed concurrently by many users, is allocated fewer resources than
an average laptop these days?

Regards,
Nina


