[ome-devel] OMERO QA submission #9383

Josh Moore josh at glencoesoftware.com
Fri Jul 4 14:08:25 BST 2014


Hi,
 
On Jul 3, 2014, at 2:57 PM, S Simard wrote:
> On 07/03/2014 01:29 PM, Josh Moore wrote:
>> 
>> See:  https://www.openmicroscopy.org/qa2/qa/feedback/9383
>> 
>> On Jul 2, 2014, at 11:07 AM, Kenneth Gillen wrote:
>>> On 01/07/2014 18:16, "S. Simard" <sebastien.simard at pasteur.fr> wrote:
>>>> 
>>>> The import issue looks partly similar to [trac 12012] - lsof gives
>>>> around 3500 open files after the OverUsageException is reported, but
>>>> they only seem to get cleared on a server restart.
>> https://trac.openmicroscopy.org.uk/ome/ticket/12012 points to
>> https://github.com/openmicroscopy/openmicroscopy/pull/2207
>> which is currently only on develop.
> 
> I think it made it into the 5.0.2 release via PR #2324, too.

Hmmmm.... agreed, but then there's another issue of close() not being called on resources. I'll see if I can reproduce.
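
While I dig into that: a quick way to watch whether handles are actually being released is to count descriptors under /proc (Linux-only sketch; the Blitz process-name lookup is an assumption about your setup):

```shell
# Count open file descriptors for a given PID via /proc (Linux-specific).
count_fds() {
    ls "/proc/$1/fd" | wc -l
}

# Example: this shell's own descriptor count. For the server you would
# substitute the Blitz PID, e.g. count_fds "$(pgrep -f Blitz-0)" -- the
# "Blitz-0" process name is an assumption here.
count_fds $$
```

Sampling that in a loop during an import should show whether the count plateaus or keeps climbing.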


>>>> Trying to use the CLI as an alternative to perform parallel imports hits
>>>> [trac 11096].
>> 
>> https://trac.openmicroscopy.org.uk/ome/ticket/11096 occurs with imports
>> from one terminal, or multiple?
> 
> That's from multiple terminals.

Understood. You may have seen Mark's initial PR for handling this case. Should be in 5.0.3.


>>>> I am currently running some more tests with Insight and the following
>>>> server-side changes:
>>>> - increase the "nofile" limit for the "omero" user from 4096 to 8192
>>>> (soft) and 12288 (hard)
>>>> - omero config set omero.throttling.servants_per_session 20000
>>>> If you are aware of caveats with this, or sensible alternatives, I would
>>>> be interested to hear them.
>> The only caveat is that until 12012 is fixed, you're consuming ever more
>> server resources by raising this limit. That may be worth it to you, but
>> you're essentially only delaying the problem.
> 
> Agreed, this is mostly just a workaround.
> Our last round of tests with a 5.0.2 importer (and presumably 12012 included) does show that the number of open files on the server keeps increasing until a restart is needed - but unlike the issue reported in ticket 11096, the leftover file handles are predominantly the TIFFs rather than the import logs.
> 
> As an aside, the 2000-file upload limit in the GUI seems to apply per importer tab, so I guess multiple simultaneous tabs could effectively bypass the threshold ;)

That's certainly interesting.
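
For anyone following along, the server-side changes quoted above map to something like the following (a sketch only; the limits.conf syntax is standard PAM, but some distros read a limits.d drop-in instead, and the bin/omero path depends on your install):

```shell
# /etc/security/limits.conf -- raise the nofile limit for the omero user
# (values taken from the message above):
#   omero  soft  nofile  8192
#   omero  hard  nofile  12288

# Then raise the servant cap and restart so both take effect:
bin/omero config set omero.throttling.servants_per_session 20000
bin/omero admin restart
```

As noted, this only delays exhaustion while the leak is unfixed; it does not remove it.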


>>>> WRT the delete issue, it seems reproducible locally with some dummy data:
>>>> - mkdir test-dataset && cd test-dataset
>>>> - for i in `seq 0 1600`; do touch "$i.fake"; done
>>>> - import the test-dataset folder into OMERO
>>>> - then using Insight or web, try to delete the data:
>>>>  * first select a bunch of images and delete them, then delete the
>>>> dataset: works fine
>>>>  * select and delete the dataset directly:
>>>>    i) just a note: fails with a Blitz OOM on a stock 5.0.2 server zip
>>>>    ii) with 2048M heap allocated to Blitz: the process eventually
>>>> fails with a "org.postgresql.util.PSQLException: ERROR: out of shared
>>>> memory  Hint: You might need to increase max_locks_per_transaction." on
>>>> postgres 8.4 (I haven't tried upping this setting, but I'm not sure it
>>>> would be the right thing to do either)
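
The repro steps quoted above, collected into a runnable sketch (.fake is OMERO's zero-byte test-image extension, so the files need no content; the import step is left as a comment since it needs a running server and the exact CLI path varies by install):

```shell
# Build the dummy dataset: 1601 empty .fake files (0..1600 inclusive).
mkdir -p test-dataset
cd test-dataset
for i in $(seq 0 1600); do touch "$i.fake"; done
ls | wc -l

# The import would then be, roughly (assumed invocation):
# bin/omero import ../test-dataset
cd ..
```

After importing, deleting the images first and then the dataset succeeds, while deleting the dataset directly triggers the failures described above.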
>> Increasing the memory available to OMERO will help, but once you run out of
>> memory in postgresql, you'll have to bump the value as suggested. Yanling
>> recently had to do the same, and said that 2048 was working for the moment.
> 
> Thanks for the pointer - for the record: http://lists.openmicroscopy.org.uk/pipermail/ome-devel/2014-June/002817.html

Np.
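
For the record, the setting in question lives in postgresql.conf and needs a PostgreSQL restart to take effect (sketch; 2048 is the value reported working in the linked thread, and the config path is an assumption for your install):

```shell
# Raise the lock table size (PostgreSQL default is 64):
echo "max_locks_per_transaction = 2048" >> /path/to/postgresql.conf
```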

Cheers,
~Josh.

