[OTDev] OpenAM performance

Vedrin Jeliazkov vedrin.jeliazkov at gmail.com
Tue Jul 5 17:11:19 CEST 2011


Hi again Folks,

On 5 July 2011 17:00, Nina Jeliazkova <jeliazkova.nina at gmail.com> wrote:

> Currently at http://apps.ideaconsult.net:8080/ambit2/  there are roughly
> 37,000 datasets and ~470 models.  This includes all intermediate datasets
> and models created (and not yet deleted) by all partners and third-party
> tools that have been using the service since last year.

According to my findings so far, an OpenTox AA service that would
support up to 100K policies reasonably well would require 16 GB of
RAM, split as follows:

-- 10 GB dedicated to OpenDJ;
-- 2 GB dedicated to OpenAM+Policy services;
-- 1 GB dedicated to MySQL;
-- 3 GB for all the other stuff (OS, services, maintenance, etc).
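For illustration, here is roughly where those shares would land in the
respective configuration files. This is a sketch only: the values
merely mirror the budget above, the property names are the stock ones
for OpenDJ 2.x, Tomcat and MySQL, and none of it has been tested as an
actual tuning recommendation:

  # OpenDJ: config/java.properties (10 GB heap)
  start-ds.java-args=-server -Xms10g -Xmx10g

  # Tomcat hosting OpenAM + policy services: bin/setenv.sh (2 GB heap)
  CATALINA_OPTS="-Xms2g -Xmx2g"

  # MySQL: my.cnf (the bulk of the 1 GB share; the split is an assumption)
  [mysqld]
  innodb_buffer_pool_size = 768M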

I think we should bring up such a service instance ASAP, even if we
decide to look for, and invest in, an entirely different RESTful AAA
solution in the long run.

>> 2) What is the upper limit on the RAM that can be allocated to a
>> virtual appliance in the cloud?

> I guess it depends on the contracts.

Well, let's take Amazon as an example again:

http://aws.amazon.com/ec2/

An Elastic Compute Cloud instance with characteristics comparable to
IDEA's production server is the "High-Memory Double Extra Large
Instance" (34.2 GB of memory, 13 EC2 Compute Units (4 virtual cores
with 3.25 EC2 Compute Units each), 850 GB of local instance storage,
64-bit platform). Such an "on demand" instance (without any long-term
commitment) costs 1 USD per hour, i.e. about 8760 USD per year.
Interestingly, this figure is very close to the cost of IDEA's server
hardware plus one year of collocation (which happens to be much more
expensive in Sofia than in Munich, for instance). For all subsequent
years, Amazon's service costs around 4 times more than operating our
own hardware in a collocation centre in Sofia. In addition, Amazon's
offer (and presumably contract) says nothing about the specific
characteristics of the actual hardware being used, which can have
nasty consequences for the overall performance of the services
running on it.
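To make the arithmetic explicit (the hourly price is from the EC2
page; the implied collocation and hardware figures merely follow from
the "very close" and "4 times" statements above, so treat them as
rough estimates):

  1 USD/hour x 24 x 365 = 8760 USD/year   (EC2 on-demand)
  8760 / 4              ~ 2190 USD/year   (implied collocation in Sofia)
  8760 - 2190           ~ 6570 USD        (implied one-off hardware cost)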

> Is it possible to increase the
> allocated resources for the current OpenTox production instances, which
> are cloud-hosted, and by how much?

Certainly, provided there's sufficient budget for it. In fact,
running some of the OT services (including the OT web site and the
LDAP server) on severely under-provisioned virtual appliances badly
affects the overall usability of the entire framework and should, in
my opinion, be remedied ASAP.

> Regarding the cloud/anti-cloud discussion, if we don't go into discussing
> "trust" and following trendy buzzwords, the real question is: does an
> available cloud solution pay off, i.e. will a cloud solution with a
> guaranteed amount of resources (network access, memory, CPU, admin
> support, etc.) cost less than the same service running on bare hardware?
> I guess the answer (and the cost distribution) might be different in
> different parts of the world.

See above. The short answer is no -- it costs more, at least in the
market segment we're looking at. The real competition happens to be
in the low end of the market. You can get free or very cheap services
in that segment, but as soon as your service requirements grow, you're
going to pay a lot more than for an in-house solution, and this is a
perfectly normal situation. After all, Amazon & co. have to cover the
costs of the free and low-cost services they offer somehow, and also
make a profit from running such a business, don't they?

> Finally, an example. The Amazon free tier <http://aws.amazon.com/free/>,
> announced last week, offers "750 hours of Amazon EC2
> <http://aws.amazon.com/ec2> Linux Micro Instance usage (613 MB of memory
> and 32-bit and 64-bit platform support)". This amount of memory is
> pretty modest for today's computers and software. What would the cost be
> if the requirements were a bit higher? How much memory do the desktops
> or laptops everybody is working on have? Less than or equal to 613 MB?

The answer is that this is simply bait. In some specific cases you
could take advantage of such a fishing scheme, especially if you're
smart and small, but unfortunately our AA solution doesn't seem to
fall into either category.

> Why is it considered acceptable that a service which is supposed to be
> accessed concurrently by many users is allocated fewer resources than an
> average laptop these days?

It is not considered acceptable, and you're expected to pay (a lot)
once you realise this.

Kind regards,
Vedrin

PS1: Besides all the arguments on "virtual vs physical" brought up so
far, there are some additional important things to consider:

-- what is the maximum possible duration of a contract with a virtual
service supplier? (Consider the probability that Amazon will still be
offering its services in the long term.)

-- what guarantees are there that the appliances offered will be able
to run your code in the future?

PS2: I don't have strong feelings about virtual vs physical, but I do
have a very strong opinion against using severely under-provisioned
hardware (virtual or not) for production services.


