[OTDev] TUM services

Vedrin Jeliazkov vedrin.jeliazkov at gmail.com
Sat Dec 19 17:39:49 CET 2009


Dear Barry,

Thanks for sharing your concerns on the OT-Dev list; hopefully this
will stimulate some additional input (from a different perspective)
on the issues you've raised. I'll try to briefly summarize some
important points which may not have been discussed on this list so
far, in an attempt to provide some context for the discussion that
might follow.

1) After extensive discussions, at the beginning of 2009 we decided
to use an agile software development approach. A nice short
introduction to this concept is available at:

http://en.wikipedia.org/wiki/Agile_software_development

Here are some quotes which, in my opinion, are particularly relevant
to the issues you've raised:

a. Agile methods break tasks into small increments with minimal
planning, and do not directly involve long-term planning. Iterations
are short time frames ("timeboxes") that typically last from one to
four weeks. Each iteration involves a team working through a full
software development cycle including planning, requirements analysis,
design, coding, unit testing, and acceptance testing when a working
product is demonstrated to stakeholders… An iteration may not add
enough functionality to warrant a market release, but the goal is to
have an available release (with minimal bugs) at the end of each
iteration. Multiple iterations may be required to release a product or
new features.

b. Timeboxes are used as a form of risk management for tasks that
easily run over their deadlines. The end date is set in stone and may
not be changed. If the team exceeds the date, the work is considered a
failure and is cancelled or rescheduled. Some timebox methods allow
the team to adjust the scope of the task in order to meet the
deadline.

c. Agile methods emphasize face-to-face communication over written
documents when the team is all in the same location. When a team works
in different locations, they maintain daily contact through
videoconferencing, voice, e-mail, etc.

d. No matter what development disciplines are required, each agile
team will contain a customer representative. This person is appointed
by stakeholders to act on their behalf and makes a personal commitment
to being available for developers to answer mid-iteration
problem-domain questions. At the end of each iteration, stakeholders
and the customer representative review progress and re-evaluate
priorities with a view to optimizing the return on investment and
ensuring alignment with customer needs and company goals.

e. Agile emphasizes working software as the primary measure of progress.

As you have already stressed, we have repeatedly failed to meet some
of the timebox limits we had previously agreed upon. In my opinion,
the reasons for these failures are as follows:

a. Some partners have failed to commit the planned resources for
software development and to produce visible results (working software);

b. Some partners are implementing wider subsets of the API, which
requires more time and resources to get done;

c. Several iterations have been informally merged together;

d. Planned tasks have been too complex/ambitious for a given timebox.

2) Our main goal in the first two iterations has been the design and
implementation of the OpenTox API, which has proven to be a harder
task than initially foreseen. Two official API versions have been
released, and several intermediate versions of (parts of) the API
have been circulating. A lot of software development against these
versions has been done, allowing a better understanding of potential
API problems and providing guidance for possible fixes. The existence
of several independent implementations of (parts of) the API, carried
out by different project partners, is actually a state-of-the-art
design and engineering practice, as it helps avoid
implementation-specific pitfalls as much as possible. Therefore, we
should even encourage the development of additional independent
implementations of (parts of) the OpenTox API (rather than
considering this a problem), because this would be the ultimate proof
of its usability, correctness, completeness and platform
independence. In fact, we have already seen some interest in this
expressed on the OT-Dev list.

3) Interoperability between webservices should not be an issue,
provided that the webservices are API compliant and the API is well
designed, mature and stable. You're right that the ultimate test of
this interoperability would be the successful implementation of
end-user oriented tools, implementing selected use cases and making
use of several relevant, independently deployed OpenTox API compliant
webservices which provide complementary functionality (e.g. find some
interesting dataset at service A, calculate some descriptors for this
dataset at service B, create a model for this dataset/descriptors at
service C, validate the resulting model at service D, apply the model
to some other dataset at service C, visualize the results in a
suitable GUI; a rough sketch of such a service chain is given after
the list below). Obviously, such an end-user oriented tool has some
prerequisites, which are:

a. A well designed, mature and stable OpenTox API (we're almost there);

b. A set of webservices implementing relevant parts of the OpenTox
API (we're almost there);

c. Professional GUI design (e.g. style sheets and graphical design for
a web-based application);

d. Integration of the end-user tool with the OpenTox (or any other) web site.
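
To make the service chain sketched above more concrete, here is a
minimal Python example of how such a workflow could look from a
client's point of view. Please note that this is only an illustrative
sketch: the service URLs and parameter names are hypothetical
placeholders, not actual deployed endpoints, and a real workflow would
also have to deal with asynchronous tasks, content negotiation and
error handling.

  # Minimal sketch of an OpenTox-style service chain.
  # All URLs and parameter names below are hypothetical placeholders.
  import requests

  dataset_uri        = "http://service-a.example.org/dataset/1"         # hypothetical
  descriptor_algo    = "http://service-b.example.org/algorithm/descX"   # hypothetical
  learning_algo      = "http://service-c.example.org/algorithm/learnY"  # hypothetical
  validation_service = "http://service-d.example.org/validation"        # hypothetical

  # 1. Calculate descriptors for the dataset at service B; we assume the
  #    service replies with the URI of the resulting dataset as plain text.
  features_uri = requests.post(descriptor_algo,
                               data={"dataset_uri": dataset_uri}).text.strip()

  # 2. Create a model at service C from the dataset with descriptors.
  model_uri = requests.post(learning_algo,
                            data={"dataset_uri": features_uri}).text.strip()

  # 3. Validate the resulting model at service D.
  report_uri = requests.post(validation_service,
                             data={"model_uri": model_uri,
                                   "dataset_uri": features_uri}).text.strip()

  # 4. Apply the model to some other dataset back at service C.
  other_dataset_uri = "http://service-a.example.org/dataset/2"          # hypothetical
  predictions_uri = requests.post(model_uri,
                                  data={"dataset_uri": other_dataset_uri}).text.strip()

  print("validation report:", report_uri)
  print("predictions:      ", predictions_uri)

A GUI prototype such as FastTox or ToxModel would essentially
orchestrate this kind of chain behind the scenes and present the
resulting URIs (or a rendering of them) to the end user.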

We believe, and have already reached agreement, that the next
iteration (to run between Dec 22 and Jan 29) should be fully devoted
to the design, implementation and testing of the FastTox and ToxModel
tool prototypes. I would also like to emphasize that these tools will
really be only PROTOTYPES until we get authentication and
authorization (AA) integrated both into the API and into the
corresponding webservices. We have intentionally left AA out of the
prototypes to be delivered by the end of Feb 2010. In particular,
this means that the FastTox and ToxModel prototypes should be
considered playgrounds, allowing everyone to make use of the existing
OpenTox webservices without any guarantees on data availability,
integrity and confidentiality for the time being.

As you might have guessed already, the main risks for Iteration 3 are
linked to points 3a) and 3b) above, namely that:

a. the OpenTox API might need some further (hopefully small) refinements;

b. FastTox and ToxModel will be fully dependent on a set of relevant
OpenTox services implementing the latest API and running at
production level (stable location, reasonable server resources, etc.).

As you can see, we're entering Iteration 3 with some important
preconditions not entirely in place, or even missing, and this could
cause further delays. Perhaps there's some hidden potential in the
levels of partner commitment…

Last but not least, I would like to emphasize that the OpenTox
framework can be used by different categories of users:

a. Software developers, who develop webservices and tools that
interact through the OpenTox API with other OpenTox compliant
webservices (developed either within the project or by third parties);

b. Third parties willing to install and run OpenTox webservices;

c. End users, who access the OpenTox framework through various
use-case-oriented tools.

The first two development iterations have mainly addressed user
categories (a) and (b), for obvious reasons (the API and services are
necessary preconditions for end-user tools). Nevertheless, we also
presented a (very) initial FastTox implementation at the annual
meeting in Rome in September, even though this wasn't planned for the
corresponding development iteration.

Well, this is it: a rather long mail which hopefully addresses some
of the issues raised. Many thanks to those of you who have read it up
to here ;-) Of course, comments and suggestions are most welcome.

Best regards,
Vedrin


