Hi Chris,

Thanks for your advice. As you suggested, I modified the script so that it creates only 4 LogicalChannelI objects and links them to the images' channels (I looked into the XML model file and found the many-to-one relationship between Channel and LogicalChannel); the execution now takes only 13 minutes.
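The relevant change looks roughly like this (just a minimal sketch, not the attached script; `om` and `omt` are the omero / omero.rtypes aliases from my script, and `pixels_list` stands in for the Pixels objects it builds elsewhere):

import omero as om
import omero.model
import omero.rtypes as omt

# Create the 4 LogicalChannel objects once, up front.
logical_channels = [om.model.LogicalChannelI() for c in range(4)]
for c, lc in enumerate(logical_channels):
    lc.setName(omt.rstring('channel-%d' % c))

# For every image's Pixels, point each Channel at one of the shared
# LogicalChannels instead of instantiating a new LogicalChannel per image.
# 'pixels_list' stands in for the PixelsI objects built elsewhere in the script.
for pixels in pixels_list:
    for c in range(4):
        channel = om.model.ChannelI()
        channel.setLogicalChannel(logical_channels[c])
        pixels.addChannel(channel)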
I'm attaching the script as an example, in case anybody needs one.

Luca


2009/8/5 Chris Allan <callan@blackcat.ca>
Hi Luca,

One big thing is to be very careful with enumerations:

26 do = om.model.DimensionOrderI()
27 do.setValue(omt.rstring('XYZCT'))
28 pt = om.model.PixelsTypeI()
29 pt.setValue(omt.rstring('uint16'))

This is creating brand new enumerations for every image inserted, which
will slow any enumeration queries to a crawl and is causing 3000 extra
INSERTs, significant graph inspection overhead, etc. You want to
retrieve the existing enumerations through IQuery and then use an
unloaded version of the object to help you out, for example
(pseudo-code):

...
dimension_orders = iquery.findAll("DimensionOrder", None)
xyzct = filter(lambda a: a.value.val == 'XYZCT', dimension_orders)[0]
xyzct.unload()
...
for image in range(0, 100):
    ...
    p.setDimensionOrder(xyzct)

The above applies to FormatI, DimensionOrderI and PixelsTypeI. In fact,
you've sort of corrupted your database in a way for the particular user
you've logged in as by adding 1000's of bogus enumerations.
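
Something like this, done once before your image loop, is what I mean
(untested sketch off the top of my head; `iquery` is the IQuery service
proxy from your session, `om` is the omero alias from your script, and
the helper name is just for illustration):

def lookup_enum(iquery, enum_class, value):
    # Fetch the existing enumeration once and return it unloaded, so it
    # can be re-used as a reference by every new object you build.
    enums = iquery.findAll(enum_class, None)
    enum = [e for e in enums if e.value.val == value][0]
    enum.unload()
    return enum

xyzct = lookup_enum(iquery, "DimensionOrder", 'XYZCT')
uint16 = lookup_enum(iquery, "PixelsType", 'uint16')
# ...same again for the Format value you use.

for i in range(0, 1000):
    p = om.model.PixelsI()
    p.setDimensionOrder(xyzct)   # re-use the unloaded enum, don't create one
    p.setPixelsType(uint16)
    # rest of the per-image setup as in your script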

Give that a try first after deleting your bogus enums and see where you
get to.

-Chris

On Wed, 2009-08-05 at 16:06 +0200, Luca Lianas wrote:
> Sorry, I forgot the attachment...
>
>
> 2009/8/5 Luca Lianas <luca.lianas@crs4.it>
> I belong to the biomedical research group at CRS4, a research
> centre in Italy; we are currently using OMERO in several
> projects and we are running some performance tests these days.
> We noticed that the server performs poorly when loading a
> large amount of data (I tried to load the metadata for
> 50,000 4-channel images).
> I ran a smaller test loading 1,000 images using a Python script,
> and it took 1 minute and 42 seconds to load the data (as said
> before, I only wrote the images' metadata into the database;
> the actual pixels are stored on an HDFS file system).
> I used the compiled version of OMERO downloaded from the
> website, with the default configuration. OMERO runs on a Linux
> server (Fedora Core 11) with a dual Opteron (model 248)
> processor and 4 GB of RAM.
> I'm wondering what the problem is and whether there are any
> hints for improving performance on the server. Any help is
> appreciated.
>
> Please see the script I'm using, attached (maybe the script
> itself is my problem).
>
> Thanks for your attention.
>
> Luca
>