[OTDev] OpenTox SWD

Barry Hardy barry.hardy at douglasconnect.com
Mon Dec 21 14:25:06 CET 2009


Dear Vedrin:

Thanks for a well-written and informative email, which I encourage all 
to read and think about.  I am changing the subject title as the subject 
is not really TUM services!  It is now time to reflect on the lessons 
learned from the first stage of OpenTox, to acknowledge our successes, 
to acknowledge and understand our weaknesses, to see what we have to do 
in the times ahead, and to put in place whatever changes are needed to 
do that.

We have been successful in "phase 1" (let's call that the first 15 months 
of the project) in creating an initial design and API and carrying out 
exploratory R&D and service implementation.  We have not always been so 
successful at meeting deadlines, keeping to structure and tasks, 
communicating sufficiently, or collaborating effectively enough when it 
comes to implementation.  Our next phase (let's call that Phase 2, 
running through the first 24 months of the project) needs greater 
attention to implementation and requires us to adapt our approaches so 
that we are successful.

Some suggestions I put out here for making our approach in phase 2 
successful:
a. we need to introduce more structure with regard to development 
roles, sub-groups and assignments
b. the API needs to be better stabilised against implementation: as far 
as possible, the current API stays stable during a development 
iteration, while the next version is worked on in parallel.
c. we should document our development work in such teams and roles more 
regularly, and in a less fragmented way, through the website, so that 
awareness is higher and progress or issues are clearer and more quickly 
perceived
d. I will introduce an overall coordination role for requirements and 
development planning; the person in this role will pay attention to this 
aspect on a regular basis (initially working closely with me, but with 
the idea of taking over much of this role during 2010, especially "daily 
operations").  This role would also include interacting with and 
including new development groups, and, in time, links with secondary 
projects.
e. in addition to the independent service development, we should now 
organise ourselves better around use cases, as agreed in Rome, and 
document our planning, tasks and progress in that way through the web 
site for iteration 3 and beyond.  Developers and groups should ideally 
document progress and issues on a regular/daily basis.
f. from iteration 3 we introduce weekly development planning meetings 
where progress from the previous week is reported, plans for the next 
week are agreed on, and issues are discussed and addressed
g. for iterations 3 and 4, everyone needs to focus the significant 
majority of their work on the success of the major milestone: the 
prototype release at the end of February.  (We cannot take an iteration 
2 approach.)
h. none of the above structure should stop innovative, exploratory work 
from happening; we can still be agile and try new things out.

The above is not an exhaustive list, but it covers some of the main 
points I can think of right now.  Come prepared tomorrow to discuss this 
in more detail, including the iteration 3 plan.

Barry

Vedrin Jeliazkov wrote:
> Dear Barry,
>
> Thanks for sharing your concerns on the OT-Dev list; hopefully this
> will stimulate some additional input (from a different perspective)
> on the issues you’ve raised. I’ll try to briefly summarize some
> important points which might not have been discussed on this list so
> far, in an attempt to provide some context for the discussion that
> might follow.
>
> 1) After extensive discussions, at the beginning of 2009 we decided
> to use an agile software development approach. A nice short
> introduction to this concept is available at:
>
> http://en.wikipedia.org/wiki/Agile_software_development
>
> Here are some quotes, which in my opinion are particularly relevant to
> the issues you’ve raised:
>
> a. Agile methods break tasks into small increments with minimal
> planning, and do not directly involve long-term planning. Iterations
> are short time frames ("timeboxes") that typically last from one to
> four weeks. Each iteration involves a team working through a full
> software development cycle including planning, requirements analysis,
> design, coding, unit testing, and acceptance testing when a working
> product is demonstrated to stakeholders… An iteration may not add
> enough functionality to warrant a market release, but the goal is to
> have an available release (with minimal bugs) at the end of each
> iteration. Multiple iterations may be required to release a product or
> new features.
>
> b. Timeboxes are used as a form of risk management for tasks that
> easily run over their deadlines. The end date is set in stone and may
> not be changed. If the team exceeds the date, the work is considered a
> failure and is cancelled or rescheduled. Some timebox methods allow
> the team to adjust the scope of the task in order to meet the
> deadline.
>
> c. Agile methods emphasize face-to-face communication over written
> documents when the team is all in the same location. When a team works
> in different locations, they maintain daily contact through
> videoconferencing, voice, e-mail, etc.
>
> d. No matter what development disciplines are required, each agile
> team will contain a customer representative. This person is appointed
> by stakeholders to act on their behalf and makes a personal commitment
> to being available for developers to answer mid-iteration
> problem-domain questions. At the end of each iteration, stakeholders
> and the customer representative review progress and re-evaluate
> priorities with a view to optimizing the return on investment and
> ensuring alignment with customer needs and company goals.
>
> e. Agile emphasizes working software as the primary measure of progress.
>
> As you have already stressed, we have repeatedly failed to meet some
> of the timebox limits we have previously agreed upon. In my opinion
> the reasons for these failures are as follows:
>
> a. Some partners have failed to commit the planned resources for
> software development and produce visible results (working software);
>
> b. Some partners are implementing wider subsets of the API, requiring
> more time/resources to get it done;
>
> c. Several iterations have been merged/joined together in an informal way;
>
> d. Planned tasks have been too complex/ambitious for a given timebox.
>
> 2) Our main goal in the first two iterations has been the
> design and implementation of the OpenTox API, which has proven to be a
> harder task than initially foreseen. Two official API versions have
> been released and several intermediate versions of (parts of) the API
> have been circulating around. A lot of software development against
> these versions has been done, allowing better understanding of
> potential API problems and providing guidance for possible fixes. The
> existence of several independent implementations of (parts of) the
> API, carried out by different project partners, is actually
> state-of-the-art design and engineering practice and helps avoid
> implementation-specific pitfalls as much as possible. Therefore, we
> should even encourage the development of additional independent
> implementations of (parts of) the OpenTox API (rather than considering
> this as some kind of a problem), because this would be the ultimate
> proof of its usability, correctness, completeness and platform
> independence. In fact, we have already seen some interest in this
> expressed on the OT-Dev list.
>
> 3) Interoperability between web services should not be an issue,
> provided that the webservices are API compliant and the API is well
> designed, mature and stable. You’re right that the ultimate test of
> this interoperability would be the successful implementation of
> end-user oriented tools, implementing selected use cases and making
> use of several relevant, OpenTox API compliant, independently deployed
> webservices which provide complementary functionality (e.g. find some
> interesting dataset at service A, calculate some descriptors for this
> dataset at service B, create a model for this dataset/descriptors at
> service C, validate the resulting model at service D, apply the model
> to some other dataset at service C, visualize the results in a
> suitable GUI). Obviously, such an end-user oriented tool has some
> prerequisites, which are:
>
> a. Well designed, mature and stable OpenTox API (we’re almost there);
>
> b. A set of webservices, implementing relevant parts of the OpenTox
> API (we’re almost there);
>
> c. Professional GUI design (e.g. style sheets and graphical design for
> a web-based application);
>
> d. Integration of the end-user tool with the OpenTox (or any other) web site.
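
The service chain described in point 3 can be illustrated with a minimal
sketch; the base URIs and stub functions below are hypothetical
placeholders standing in for live OpenTox webservices, not the real API:

```python
# Hypothetical sketch of the FastTox-style use case chain.
# The URIs and stub functions are placeholders; a live client would
# issue REST calls to the deployed services instead.

def find_dataset(service_a):
    # Service A: locate an interesting dataset, return its URI.
    return service_a + "/dataset/42"

def calculate_descriptors(service_b, dataset_uri):
    # Service B: compute descriptors for the dataset, return the
    # URI of the enriched dataset.
    return service_b + "/dataset/42;descriptors"

def create_model(service_c, dataset_uri):
    # Service C: build a model from the dataset/descriptors.
    return service_c + "/model/7"

def validate_model(service_d, model_uri):
    # Service D: validate the model, return a validation report URI.
    return service_d + "/validation/7"

def apply_model(service_c, model_uri, dataset_uri):
    # Service C: apply the model to some other dataset, return the
    # URI of the predictions.
    return service_c + "/dataset/99;predictions"

def run_use_case():
    # Chain the independently deployed services A..D together.
    a, b, c, d = ("http://a.example", "http://b.example",
                  "http://c.example", "http://d.example")
    dataset = find_dataset(a)
    enriched = calculate_descriptors(b, dataset)
    model = create_model(c, enriched)
    report = validate_model(d, model)
    predictions = apply_model(c, model, "http://a.example/dataset/99")
    return dataset, model, report, predictions
```

Each step consumes only URIs produced by the previous one, which is why
interoperability reduces to each individual service being API compliant.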
>
> We believe, and have already agreed, that the next
> iteration (to be run between Dec 22 and Jan 29) should be fully
> devoted to the design, implementation and testing of the FastTox and
> ToxModel tool prototypes. I would also like to emphasize that these
> tools will really be only PROTOTYPES until we get authentication and
> authorization (AA) integrated both into the API and the corresponding
> webservices. We have intentionally left AA out of the prototypes to be
> delivered by the end of Feb 2010. In particular, this means that the
> FastTox and ToxModel prototypes should be considered as playgrounds,
> allowing everyone to make use of the existing OpenTox webservices
> without any guarantees on data availability, integrity and
> confidentiality for the time being.
>
> As you might have guessed already, the main risks for Iteration 3 are
> linked to the above-mentioned points 3a) and 3b) and consist in the
> fact that:
>
> a. the OpenTox API might need some further (hopefully small) refinements;
>
> b. FastTox and ToxModel would be fully dependent on a set of relevant
> OpenTox services, implementing the latest API and running at
> production level (stable location, reasonable server resources, etc).
>
> As you can see, we’re entering iteration 3 with some important
> pre-conditions not entirely in place or even missing and this could
> cause further delays. Perhaps there’s some hidden potential in the
> levels of partner commitments…
>
> Last but not least, I would like to emphasize that the OpenTox
> framework can be used by different categories of users:
>
> a. Software developers who develop webservices and tools interacting
> through the OpenTox API with other OpenTox compliant webservices
> (either developed in the project or by 3rd parties);
>
> b. 3rd parties willing to install and run OpenTox webservices;
>
> c. End-users, accessing the OpenTox framework through various use
> case-oriented tools.
>
> The first two development iterations have mainly addressed the (a) and
> (b) categories of users, for obvious reasons (the API and services are
> necessary pre-conditions for end-user tools). Nevertheless, we have
> also presented a (very) initial FastTox implementation at the annual
> meeting in Rome in September, even though this wasn’t planned in the
> corresponding development iteration.
>
> Well, this is it – a rather long mail which hopefully addresses some
> of the issues raised. Many thanks to those of you who have read this
> far ;-) Of course, comments and suggestions are most welcome.
>
> Best regards,
> Vedrin
> _______________________________________________
> Development mailing list
> Development at opentox.org
> http://www.opentox.org/mailman/listinfo/development
>


