[OTDev] OpenAM performance

surajit ray mr.surajit.ray at gmail.com
Tue Jul 5 06:03:56 CEST 2011


Hi,

Just my $0.02 ....

I have raised in the past the issue of tokens evaporating before a task
is completed and its data uploaded. I think it's a serious question which
as yet has no answer within the A&A framework. Nina has also said in her
reply that this [OpenAM] is a temporary solution rather than a completely
thought-out one. I feel that investing in technology [hardware-wise] which
does not yet serve our purposes completely is unnecessary; we would do
better with a less costly solution, which would also mean less of a loss
if we decide to change the A&A framework later.
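
Just to illustrate the kind of workaround every service currently has to
carry on its own, here is a minimal sketch that re-checks a token before
uploading the results of a long task. It assumes the classic OpenSSO-style
REST endpoints [/identity/isTokenValid and /identity/authenticate]; the
server URL and credentials below are placeholders. The framework itself
offers nothing better, which is exactly the problem.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class TokenGuard {

        // Placeholder OpenAM deployment URL, for illustration only.
        static final String OPENAM = "https://sso.example.org/opensso";

        static String httpGet(String url) throws Exception {
            HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
            BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
            String line = in.readLine();
            in.close();
            return line;
        }

        // Returns true while the token is still alive (reply "boolean=true").
        static boolean isValid(String token) throws Exception {
            String reply = httpGet(OPENAM + "/identity/isTokenValid?tokenid="
                    + URLEncoder.encode(token, "UTF-8"));
            return reply != null && reply.endsWith("true");
        }

        // Re-authenticate and parse the "token.id=..." reply.
        static String authenticate(String user, String pass) throws Exception {
            String reply = httpGet(OPENAM + "/identity/authenticate?username="
                    + URLEncoder.encode(user, "UTF-8")
                    + "&password=" + URLEncoder.encode(pass, "UTF-8"));
            return reply.substring(reply.indexOf('=') + 1);
        }

        // Usage: token = ensureToken(token, user, pass); then upload results.
        static String ensureToken(String token, String user, String pass) throws Exception {
            return isValid(token) ? token : authenticate(user, pass);
        }
    }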

As for OpenAM itself, it is no longer supported by Sun and is maintained
by an independent Norwegian firm which, as we have now seen, has managed
to mess up the latest release. Businesses with confidential data are not
going to be inspired by this technology, let alone trust it with their IP.
Moreover, our complex policy system, although perfectly reasonable to us,
is a mystery to a new user. And something which is not easily understood
will not be easily trusted.

Also, I am amused by the anti-cloud sentiment here. There are vast [and
secure] systems running in the cloud at this moment [on Google, Rackspace,
Amazon and many others]. A cloud machine is just a machine, and although it
is virtual, the same security mechanisms apply. Data can be encrypted
before it is stored, and you can always design a workflow where the data
never sits unencrypted on the cloud machine [see the sketch below]. In
fact the cloud machine [at least on Amazon] can connect to an external
machine over a VPN, so the policy data itself can be stored outside, with
only the operations running in the cloud.
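
To make that concrete, here is a minimal sketch of client-side encryption
with the standard javax.crypto API: the AES key is generated and held on
our own machine, and only ciphertext ever reaches the cloud instance. The
class name and the sample data are made up for illustration.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class EncryptBeforeUpload {

        // Encrypt plaintext with a key that never leaves our own machine.
        static byte[] encrypt(byte[] plaintext, SecretKey key) throws Exception {
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            return cipher.doFinal(plaintext);
        }

        public static void main(String[] args) throws Exception {
            // Key generation happens outside the cloud; only the resulting
            // ciphertext is uploaded to the cloud instance.
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey key = kg.generateKey();
            byte[] ciphertext = encrypt("policy data".getBytes("UTF-8"), key);
            System.out.println("Uploading " + ciphertext.length + " encrypted bytes");
        }
    }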

And lastly, I do not agree with Luchesar about physical security. Although
a nice big computer room with security guards and canines outside conjures
up a rosy picture of a secure environment, the biggest problems and leaks
don't happen because people get physical access to the machine; carelessly
designed code will be the bigger culprit in the long run. In fact, having
it on the cloud ensures that no one gets physical access.

Amazon also operates a service called CloudFront, which is essentially a
content delivery network: edge locations that serve data with low latency
to a particular geographical area. That is ideal for our situation. Amazon's
load balancing is also automated, and at the very least it supports a
private network within the cloud, which would fit our load-balancing
strategies perfectly.

The biggest advantage of the cloud system, of course, is instant scale-up.
You just shut down an instance, increase its memory and CPU, and bring it
back up again [a sketch of such a resize follows below]. I think that in
itself will solve half of our problems in the most cost-effective manner.
You can see our code running on the cloud at:
http://50.19.222.138:8080/MaxtoxMCSS
The static IP is also provided free by Amazon.
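
As a sketch of how simple that resize is with the AWS SDK for Java [the
credentials, instance ID and target type below are placeholders, and a
real script would poll until the instance is actually stopped before
changing its type]:

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.ec2.AmazonEC2;
    import com.amazonaws.services.ec2.AmazonEC2Client;
    import com.amazonaws.services.ec2.model.ModifyInstanceAttributeRequest;
    import com.amazonaws.services.ec2.model.StartInstancesRequest;
    import com.amazonaws.services.ec2.model.StopInstancesRequest;

    public class ScaleUp {
        public static void main(String[] args) {
            AmazonEC2 ec2 = new AmazonEC2Client(
                    new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
            String instanceId = "i-12345678"; // placeholder

            // 1. Shut the instance down (works for EBS-backed instances).
            ec2.stopInstances(new StopInstancesRequest().withInstanceIds(instanceId));

            // ... wait here until the instance state is "stopped" ...

            // 2. Increase memory and CPU by switching the instance type.
            ec2.modifyInstanceAttribute(new ModifyInstanceAttributeRequest()
                    .withInstanceId(instanceId)
                    .withInstanceType("m1.large"));

            // 3. Bring it back up again.
            ec2.startInstances(new StartInstancesRequest().withInstanceIds(instanceId));
        }
    }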

On the whole I think having a central policy server is a serious security
hazard. We don't want all the client data compromised by the failure of
the central policy server. A distributed system, as I suggested earlier,
is more desirable and easily designed. Such a system would have specific
policy servers for certain geographical or logical groups; if one is
hacked, the other groups are still secure.
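
A toy sketch of what I mean [the group names and URLs are of course made
up]: each group talks only to its own policy server, so compromising one
server exposes one group's policies and nothing else.

    import java.util.HashMap;
    import java.util.Map;

    public class PolicyServerRegistry {

        // Hypothetical mapping of geographical/logical groups to their
        // own policy servers; one entry per group, no central server.
        static final Map<String, String> SERVERS = new HashMap<String, String>();
        static {
            SERVERS.put("eu-group", "https://policy-eu.example.org/");
            SERVERS.put("us-group", "https://policy-us.example.org/");
        }

        static String serverFor(String group) {
            String url = SERVERS.get(group);
            if (url == null)
                throw new IllegalArgumentException("Unknown group: " + group);
            return url;
        }
    }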

OpenAM also has serious internal inconsistencies, such as numerous names
for the same variable, which, as Andreas has said in the past, cannot be
fixed without hacking the code. A&A should also play a role in collecting
and disposing of unused resources, which is currently not possible within
OpenAM.

Cheers
Surajit

On 4 July 2011 21:35, Luchesar V. ILIEV <luchesar.iliev at gmail.com> wrote:

> On 07/04/2011 15:52, Vedrin Jeliazkov wrote:
> >> Given your below results, the most important
> >> step besides upgrading will be a real powerful LDAP service for
> >> configuration store.
> >
> > Yes, this is simply a must. In fact I'm convinced that as long as we
> > stick to the current rather resource demanding AA solution that we've
> > designed and implemented, we should probably run it on bare hardware,
> > not "in the cloud", in order to ensure satisfying performance. Another
> > important aspect would be fault tolerance (both OpenAM and OpenDJ
> > support load-balancing, failover and federating but this needs to be
> > investigated/tested further). In addition, for such a critical
> > component such as AA it is often required to have multiple servers,
> > running at different physical locations, to ensure proper level of
> > availability.
>
> Hope you don't mind if I add yet another aspect: security. From this
> standpoint, not only it's desirable to avoid virtualization (as the
> added technical complexity means much less control), but it's even
> better to deploy such services on dedicated hardware.
>
> Overall, a serious centralized AA system would require careful planning
> starting from the very physical location where it would be deployed (it
> should, obviously, allow for tight control of who and when has access to
> the hardware). And, as security is by definition a dynamic process,
> never a static condition, that system would need constant attention:
> monitoring, software management (at the very least, patching regularly),
> proactive protection and contingency preparedness.
>
> This, again, all speaks strongly in favour of a dedicated system.
>
> Best regards,
> Luchesar
> _______________________________________________
> Development mailing list
> Development at opentox.org
> http://www.opentox.org/mailman/listinfo/development
>
-- 
Surajit Ray
Partner
www.rareindianart.com