[OTDev] Fwd: [OTP] SWDT group meeting today, 23 May, 11:00 CEST

surajit ray mr.surajit.ray at gmail.com
Tue May 24 16:19:38 CEST 2011


Hi,

I am forwarding a mail I had written earlier to Nina. Maybe others
could pool ideas on a good system of rating our models.


---------- Forwarded message ----------
From: surajit ray <mr.surajit.ray at gmail.com>
Date: 24 May 2011 13:58
Subject: Re: [OTP] SWDT group meeting today, 23 May, 11:00 CEST
To: Nina Jeliazkova <jeliazkova.nina at gmail.com>


Hi Nina,

As Roman said in the meeting, there are just so many things to
consider. To start a discussion, let me give you my view on this.

My idea of a good system to facilitate a "good" decision to test/use a
model would consist of the following considerations:

a) usage statistics
  i) number of uses
  ii) ratio of successful to unsuccessful runs (computationally)
  iii) ratio of successful to unsuccessful runs (user satisfaction)
  iv) previous user data - id, email, institution, etc.
b) documentation quality
  i) available in current formats (doc, docx, pdf, etc.)
  ii) valid annotations and references
  iii) illustrations of expected use cases
  iv) validation statistics included
c) results presented in an easily comprehensible manner
  i) predictions presented in a readable/searchable/taggable manner
(apart from readability, the other two are covered by the ontology)
  ii) permanent archive location for the results
d) alignment with the REACH requirements (QPRF, QMRF, etc.) in producing
the summary
e) statistics on successful use cases with the model (will vary from model to model)

The idea of the rating would then be to cover as many of the points
mentioned above as possible. We could maybe have a broad rating for
each section (1-5), and then decide on a weight for each subpoint,
etc. A sort of ModelRank (like Google's PageRank, although that is
almost completely driven by links and usage, unlike our
requirements in OpenTox).
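To make the weighting idea concrete, here is a minimal sketch of how the
per-section ratings could be combined into one score. The section names,
weights, and the 1-5 scale below are my own illustrative assumptions, not
an agreed OpenTox scheme:

```python
# Sketch of a weighted "ModelRank" score. Section names and weights
# are illustrative assumptions, not an agreed OpenTox scheme.

SECTION_WEIGHTS = {
    "usage_statistics": 0.30,
    "documentation_quality": 0.25,
    "result_presentation": 0.20,
    "reach_alignment": 0.15,
    "use_case_statistics": 0.10,
}

def model_rank(section_scores):
    """Combine per-section ratings (1-5) into a single weighted 1-5 score."""
    for name, score in section_scores.items():
        if not 1 <= score <= 5:
            raise ValueError(f"score for {name} out of range: {score}")
    return sum(SECTION_WEIGHTS[name] * score
               for name, score in section_scores.items())

# Example: a model with mixed ratings across the sections.
scores = {
    "usage_statistics": 4,
    "documentation_quality": 3,
    "result_presentation": 5,
    "reach_alignment": 2,
    "use_case_statistics": 4,
}
print(round(model_rank(scores), 2))  # prints 3.65
```

The weights would of course need to be agreed on by the group, and could
themselves be tuned once we have usage data.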

Once a subset is reached using the above points, the models should then be
1) tested against previously run instances to match expected values
2) tested, if possible, by independent validators (human/machine)
3) tested by examining the validation statistics provided by the model
[or by an algorithm associated with the model, like those made by the
validation algorithms within our API].
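Step 1 could be as simple as replaying archived prediction runs and
checking the new outputs against the stored ones within a tolerance. A
minimal sketch (the record format and the predict callable are my own
assumptions, not part of the OpenTox API):

```python
# Sketch of replaying archived prediction runs against a model (step 1).
# The archived-run record format and the model_predict callable are
# illustrative assumptions, not part of the OpenTox API.

def replay_matches(model_predict, archived_runs, tol=1e-6):
    """Return True if the model reproduces every archived prediction."""
    for run in archived_runs:
        new_value = model_predict(run["input"])
        if abs(new_value - run["expected"]) > tol:
            return False
    return True

# Toy model (squaring) and two archived runs for illustration:
archived = [
    {"input": 2.0, "expected": 4.0},
    {"input": 3.0, "expected": 9.0},
]
print(replay_matches(lambda x: x * x, archived))  # prints True
```

A failure here would flag the model for human review before it is
offered for predictions.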

Finally the models can be used to make predictions and produce the reports.

Considering that we are automating almost "everything", we should end
up with hundreds if not thousands of models soon enough. The big
challenge then is to separate those that "work" from those that "don't work".

regards
S


On 24 May 2011 13:03, Nina Jeliazkova <jeliazkova.nina at gmail.com> wrote:
> Hi Surajit,
>
> Could you tell me if you have suggestions what the models rating should
> include - just 1-5 stars, comments, else ?
>
> Thanks,
> Nina
>
> On 23 May 2011 13:36, Roman Affentranger <roman at douglasconnect.com> wrote:
>>
>> Dear All
>>
>> Summary and actions regarding creation of models for REACH-relevant
>> endpoints:
>> - TUM is still working on getting set up for modeling
>> - IBMC is expecting to get models this week
>> - SL has one model creation running (Salmonella Mutagenicity), will create
>> others.
>> - NTUA will re-create the Fish toxicity models with the cleaned dataset
>> - IDEA needs to put Dragon descriptor calculation online before being able
>> to continue on LogP models
>>
>> Other Actions:
>> - Everyone who hasn't yet, please add your REACH modeling activities to
>>
>> https://docs.google.com/document/d/1NhY0MsQL41La1P9JBkzDVRh1LpA3gTvbub8_KF1FN8s/edit?hl=en_US#,
>> and the results to:
>>
>> https://spreadsheets.google.com/spreadsheet/ccc?key=0Aop1CPY3IHzvdFU0RFcwT2llU21XTHEyeEtva2M0WlE&hl=en_US&pli=1#gid=0
>> - Surajit to contact Martin regarding integration of validation service in
>> SL's model creation
>> - Martin to provide example cURL calls for creation of model comparison
>> report
>> - Nina to set up star-rating of models in ToxPredict
>>
>> Best regards,
>> Roman
>>
>> On Mon, May 23, 2011 at 9:47 AM, Roman Affentranger <
>> roman at douglasconnect.com> wrote:
>>
>> > Dear All
>> >
>> > Agenda for today's SWDT group meeting.
>> >
>> > 1) Status of modeling activities on REACH-relevant endpoints
>> > 2) Continuation of last week's discussion on RDF scalability
>> > 3) Other items
>> >
>> > Best regards,
>> > Roman
>> >
>> > Meeting instructions:
>> > =============================
>> >
>> > 1.  Please join my meeting.
>> > https://www3.gotomeeting.com/join/688800366
>> >
>> > 2.  Use your microphone and speakers (VoIP) - a headset is recommended.
>> > Or, call in using your telephone.
>> > Germany: +49 (0) 898 7806 6469
>> > Italy: +39 0 553 98 95 68
>> > Switzerland: +41 (0) 225 3314 53
>> > United Kingdom: +44 (0) 121 368 0268
>> > United States: +1 (484) 589-1020
>> >
>> > Access Code: 688-800-366
>> > Audio PIN: Shown after joining the meeting
>> >
>> > Meeting Password: opentox321
>> > Meeting ID: 688-800-366
>> >
>> > GoToMeeting®
>> > Online Meetings Made Easy™
>> >
>> > =============================
>> >
>> >
>> >
>>
>>
>> --
>> Dr. Roman Affentranger
>> R&D Activity Coordinator
>> Douglas Connect
>> Baermeggenweg 14
>> 4314 Zeiningen
>> Switzerland
>> _______________________________________________
>> Partners mailing list
>> Partners at opentox.org
>> http://www.opentox.org/mailman/listinfo/partners
>
>



-- 
Surajit Ray
Partner
www.rareindianart.com


