[OTDev] Performance testing and monitoring

Vedrin Jeliazkov vedrin.jeliazkov at gmail.com
Sat Mar 13 00:47:48 CET 2010


Hi Christoph,

On 12 March 2010 18:20, Christoph Helma <helma at in-silico.de> wrote:

> You can restart the tests for our services (with the exception of the fminer
> test, which I mentioned in my previous email). In fact we would need
> some continuous testing, to find out what caused the high latencies you
> have mentioned.

OK -- done:

http://ambit.uni-plovdiv.bg/cgi-bin/smokeping.cgi?target=IN-SILICO

> For the destructive tests I will send you requests in the format below
> (probably after the Easter holidays - I do not want to troubleshoot during
> holidays).

Agreed.

> And from time to time I will have to ask you to stop testing (e.g. while
> debugging server configurations - it is hard to spot errors if
> there are too many entries).

Yes, of course -- just drop me a note.

In addition, our colleague Luchesar has been working for a while on
a SmokePing add-on that would enable authenticated and authorised
users to configure targets and probes remotely via a web-browser-based
interface. We could give it a try in OT once it becomes mature and
stable enough.

> PS Can you also create scripts with sequences of calls (e.g. create
> dataset, create fminer features, wait for task, create lazar model, make
> prediction, check for correct answer, delete model, delete feature
> dataset, delete dataset)? That's how we test internally (it guarantees
> that you have all necessary resources available and makes it possible to
> clean up with a post-test hook).

Yes, that would be possible, but we would have to write a dedicated
SmokePing probe to perform such complex, automated workflow tests,
rather than relying on the generic curl probe. Of course, developing
and testing the probe itself would take some additional time
(e.g. 2-3 months).
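
For illustration, here is a rough sketch (in Python, using the requests
library) of the kind of call sequence such a probe would have to drive.
The endpoint paths, parameter names and the assumption that every POST
returns a task URI to be polled are placeholders of mine, not the actual
OpenTox API; a real probe would also have to plug into SmokePing's probe
framework and report latencies instead of just printing them.

import time
import requests

BASE = "http://example-opentox-service.org"   # placeholder service root


def wait_for_task(task_uri, timeout=300, poll=5):
    """Poll a task URI until the task finishes or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        r = requests.get(task_uri, headers={"Accept": "text/uri-list"})
        if r.status_code == 200:            # assume: 200 means the task is done
            return r.text.strip()           # and the body holds the result URI
        r.raise_for_status()
        time.sleep(poll)
    raise TimeoutError("task did not finish in time: %s" % task_uri)


def run_workflow():
    created = []                            # URIs to remove in the post-test hook
    try:
        # 1. create a dataset from a local training file (placeholder payload)
        with open("training.csv", "rb") as f:
            r = requests.post(BASE + "/dataset", files={"file": f})
        dataset_uri = wait_for_task(r.text.strip())
        created.append(dataset_uri)

        # 2. create fminer features for the dataset
        r = requests.post(BASE + "/algorithm/fminer",
                          data={"dataset_uri": dataset_uri})
        feature_dataset_uri = wait_for_task(r.text.strip())
        created.append(feature_dataset_uri)

        # 3. create a lazar model from the dataset and the feature dataset
        r = requests.post(BASE + "/algorithm/lazar",
                          data={"dataset_uri": dataset_uri,
                                "feature_dataset_uri": feature_dataset_uri})
        model_uri = wait_for_task(r.text.strip())
        created.append(model_uri)

        # 4. make a prediction and check that an answer comes back at all
        r = requests.post(model_uri,
                          data={"compound_uri": BASE + "/compound/test"})
        prediction_uri = wait_for_task(r.text.strip())
        assert requests.get(prediction_uri).ok, "prediction not retrievable"
    finally:
        # post-test hook: delete model, feature dataset and dataset (newest first)
        for uri in reversed(created):
            requests.delete(uri)


if __name__ == "__main__":
    start = time.time()
    run_workflow()
    print("workflow round trip: %.1f s" % (time.time() - start))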

Kind regards,
Vedrin


