[OTDev] Techie Table
chvng at mail.ntua.gr
Mon Sep 6 17:22:04 CEST 2010
- Previous message: [OTDev] Clustering and Scaling Algorithms
- Next message: [OTDev] Techie Table (Requirements)
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
Hello All,

I'm sending out some notes/thoughts about the Techie Table.

@All: I'm thinking of organizing the discussion and presentation in two parts.

In the first part we will show how a developer can consume OpenTox web services from inside a Java application using some tools of ours; this will also include some A&A. Using web clients and parsers, the participants will download and parse OpenTox resources such as compounds, features, datasets and models, and inspect them from within their Java application. Exercises will include parsing datasets and converting them into Weka data objects (Instances), and training various models using NTUA, TUM and AMBIT algorithms (using the A&A API where necessary). This will familiarize the participants with direct consumption of OT web services at a more advanced level than web interfaces, and in a more programmatic way than plain curl. It will take about 45 minutes, including discussion/questions.

In the second part, I'm thinking of having the audience build an OpenTox web service with some guidance. We will design and deploy a clustering algorithm, for the sake of getting in touch with web services and gaining better insight into the OpenTox API. First we will study the requirements of the OT API and formulate hypothetical curl commands showing what a request would look like (a top-down approach). Then we will formulate an RDF representation of the algorithm, discussing along the way the structure of such a document and the information that should be found in it. Jena will be used as an RDF editor/parser, which will be quite useful to developers who want to get involved in any contemporary (semantic web, etc.) project. We will not go into much detail about model storage, but we'll briefly present the underlying database table structure. Weka will be used to materialize the XMeans algorithm.
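For the top-down step, the request formulation can be sketched in plain Java before any client library enters the picture. The host and algorithm names below are placeholders invented for illustration, not actual OpenTox deployments:

```java
// Sketch: composing OpenTox-style resource URIs, as one would first
// write them down as hypothetical curl commands.
// Host and resource names are placeholders, not real services.
public class UriSketch {

    /** Compose an algorithm URI under a service root,
     *  e.g. http://example.org/ot/algorithm/xmeans */
    static String algorithmUri(String service, String name) {
        return service.replaceAll("/+$", "") + "/algorithm/" + name;
    }

    /** The curl command a client would issue to apply the algorithm
     *  to a dataset, asking for a URI list back. */
    static String trainCommand(String algorithmUri, String datasetUri) {
        return "curl -X POST -d dataset_uri=" + datasetUri
             + " -H 'Accept: text/uri-list' " + algorithmUri;
    }

    public static void main(String[] args) {
        String alg = algorithmUri("http://example.org/ot/", "xmeans");
        System.out.println(alg);
        System.out.println(trainCommand(alg, "http://example.org/ot/dataset/54"));
    }
}
```

Writing the commands down first, as above, makes the API requirements concrete before any server code exists.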
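The RDF document we will build is, at bottom, a set of subject–predicate–object triples. In practice Jena does the parsing; the hand-rolled splitter below only handles the simplest N-Triples lines and exists purely to show the shape of the data (the example URIs are illustrative):

```java
// Toy splitter for simple N-Triples lines of the form
//   <subject> <predicate> "literal" .
// Real parsing should go through Jena; this only illustrates the
// triple structure an algorithm representation boils down to.
public class TripleSketch {

    static String[] split(String line) {
        line = line.trim();
        if (line.endsWith(".")) {
            line = line.substring(0, line.length() - 1).trim();
        }
        // subject and predicate are IRIs wrapped in angle brackets
        int p1 = line.indexOf('>');                 // end of subject IRI
        int p2 = line.indexOf('>', p1 + 1);         // end of predicate IRI
        String subject = line.substring(1, p1);
        String predicate = line.substring(line.indexOf('<', p1) + 1, p2);
        String object = line.substring(p2 + 1).trim();
        return new String[] { subject, predicate, object };
    }

    public static void main(String[] args) {
        String line = "<http://example.org/algorithm/xmeans> "
                    + "<http://purl.org/dc/elements/1.1/title> \"XMeans\" .";
        String[] t = split(line);
        System.out.println(t[0] + " | " + t[1] + " | " + t[2]);
    }
}
```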
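In the workshop the actual clustering is delegated to Weka's XMeans implementation, but the assignment step it repeats is easy to show in plain Java. The sketch below uses one-dimensional data and fixed centroids and is purely illustrative:

```java
// One k-means-style assignment step on 1-D data: each point goes to the
// nearest centroid. Weka's XMeans adds centroid updates and automatic
// selection of k; this sketch only shows the nearest-centroid idea.
public class AssignSketch {

    static int[] assign(double[] points, double[] centroids) {
        int[] cluster = new int[points.length];
        for (int i = 0; i < points.length; i++) {
            double best = Double.MAX_VALUE;
            for (int c = 0; c < centroids.length; c++) {
                double d = Math.abs(points[i] - centroids[c]);
                if (d < best) {
                    best = d;
                    cluster[i] = c;
                }
            }
        }
        return cluster;
    }

    public static void main(String[] args) {
        int[] a = assign(new double[] { 0.1, 0.2, 4.9, 5.1 },
                         new double[] { 0.0, 5.0 });
        System.out.println(java.util.Arrays.toString(a)); // [0, 0, 1, 1]
    }
}
```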
The whole effort will boil down to about 500 lines of source code, including the use of various Java libraries such as Restlet, Weka, Jena and DeciBell. This will take about 75 minutes, and in the remaining time we will discuss various challenges a developer can take on. By the end of the day I'll send out a list of what each participant should have installed. If any participants have trouble installing any of the libraries/tools/programs on the list, we'll do it together at the table.

Best regards,
Pantelis