[ome-devel] OMERO QA submission #9383

S Simard ssimard at pasteur.fr
Thu Jul 3 13:57:41 BST 2014


Hi Josh,

On 07/03/2014 01:29 PM, Josh Moore wrote:
> Moving to ome-devel.
>
> See:  https://www.openmicroscopy.org/qa2/qa/feedback/9383
>
> Caused by: Ice.UnknownUserException
>    unknown = "omero::OverUsageException
>    serverStackTrace = "ome.conditions.OverUsageException: servantsPerSession reached for 59d08ac3-7ffb-46be-ba8d-ede69be27237: 10000
>      at omero.util.ServantHolder.put(ServantHolder.java:161)
>      at omero.cmd.SessionI.registerServant(SessionI.java:589)
>      at omero.cmd.SessionI.submit_async(SessionI.java:203)
>
>
> On Jul 2, 2014, at 11:07 AM, Kenneth Gillen wrote:
>> On 01/07/2014 18:16, "S. Simard" <sebastien.simard at pasteur.fr> wrote:
>>
>>> Hi Kenny,
> Hi Sebastien,
>
>
>>> Thanks for following this up - fyi, here is the current state of things.
>>>
>>> My user attempts to upload large-ish datasets (around 1600 TIF files,
>>> about 1.7 MB each) using OMERO.insight.
>>> This happens sequentially, so you typically have two or three such bulk
>>> imports running within Insight/Importer tabs.
>>> The problems that arise are then:
>>> 1 - partial failure to import due to the "servantsPerSession" error you
>>> received
>>> 2 - in order to clean up the inconsistent dataset, my user attempts to
>>> delete it, and hits another set of errors
>>>
>>>
>>> The import issue looks partly similar to [trac 12012] - lsof gives
>>> around 3500 open files after the OverUsageException is reported, but
>>> they only seem to get cleared on a server restart.
> https://trac.openmicroscopy.org.uk/ome/ticket/12012 points to
> https://github.com/openmicroscopy/openmicroscopy/pull/2207
> which is currently only on develop.

I think it made it into the 5.0.2 release via PR #2324, too.

>>> Trying to use the CLI as an alternative to perform parallel imports hits
>>> [trac 11096].
>
> https://trac.openmicroscopy.org.uk/ome/ticket/11096 occurs with imports
> from one terminal, or multiple?

That's from multiple terminals.
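
In case it helps to reproduce: the terminals are each doing essentially the
following against the same server (the dataset id and paths below are just
placeholders, exact flags as per "bin/omero import -h"):

    bin/omero login
    bin/omero import -d 123 /data/run-A/*.tif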

>
>
>>> I am currently running some more tests with Insight and the following
>>> server-side changes:
>>> - increase the "nofile" limit for the "omero" user from 4096 to 8192
>>> (soft) and 12288 (hard)
>>> - omero config set omero.throttling.servants_per_session 20000
>>> If you are aware of caveats with this, or sensible alternatives, I would
>>> be interested to hear them.
> The only caveat is that until 12012 is fixed, you're consuming ever more
> server resources by raising this limit. That may be worth it to you, but
> you're essentially only delaying the problem.

Agreed, this is mostly just a workaround.
Our last round of tests with a 5.0.2 importer (and presumably 12012 
included) does show that the number of open files on the server keeps 
increasing until a restart is needed - but unlike the issue reported in 
ticket 11096, the leftover file handles are predominantly the TIFs 
rather than the import logs.
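
For the record, the workaround on our side boils down to roughly this,
assuming the nofile limit is managed via /etc/security/limits.conf (the
numbers are just what we picked for our load, not recommendations):

    # /etc/security/limits.conf -- raise open-file limits for the omero unix user
    omero  soft  nofile  8192
    omero  hard  nofile  12288

    # as the omero user: raise the per-session servant cap, then restart
    bin/omero config set omero.throttling.servants_per_session 20000
    bin/omero admin restart

and we keep an eye on the leak with something like:

    lsof -u omero | grep -ci '\.tif'   # count TIF handles held by the omero user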

As an aside, the 2000-file upload limit in the GUI seems to apply per 
importer tab, so I guess multiple simultaneous tabs could effectively 
bypass the threshold ;)

>
>>> WRT the delete issue, it seems reproducible locally with some dummy data:
>>> - mkdir test-dataset && cd test-dataset
>>> - for i in `seq 0 1600`; do touch "$i.fake"; done
>>> - import the test-dataset folder into OMERO
>>> - then using Insight or web, try and delete the data:
>>>   * first select a bunch of images and delete them, then delete the
>>> dataset: works fine
>>>   * select and delete the dataset directly:
>>>     i) just a note: fails with a Blitz OOM on a stock 5.0.2 server zip
>>>     ii) with 2048M heap allocated to Blitz: the process eventually
>>> fails with a "org.postgresql.util.PSQLException: ERROR: out of shared
>>> memory  Hint: You might need to increase max_locks_per_transaction." on
>>> postgres 8.4 (I haven't tried upping this setting, but I'm not sure it
>>> would be the right thing to do either)
> Increasing the memory available to OMERO will help, but once you run out of
> memory in postgresql, you'll have to bump the value as suggested. Yanling
> recently had to do the same, and said that 2048 was working for the moment.

Thanks for the pointer - for the record: 
http://lists.openmicroscopy.org.uk/pipermail/ome-devel/2014-June/002817.html
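
If we do end up bumping max_locks_per_transaction as suggested, I assume it
amounts to something like this on the postgres side (8.4 here; 2048 being the
value mentioned above), followed by a postgres restart:

    # postgresql.conf
    max_locks_per_transaction = 2048

On the OMERO side, the extra Blitz heap in our tests was set by editing the
-Xmx value for the Blitz server in etc/grid/templates.xml (as far as I can
tell, that is still the place to do it in a 5.0.2 zip) and restarting.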

>>> Regards,
>>> Sebastien
>>>
>>> [trac 12012] http://trac.openmicroscopy.org.uk/ome/ticket/12012
>>> [trac 11096] http://trac.openmicroscopy.org.uk/ome/ticket/11096
> Cheers,
> ~Josh.

Thanks,
Sebastien

