[ome-devel] Problem when importing >1000 images

Andrii Iudin andrii at ebi.ac.uk
Fri May 6 11:49:52 BST 2016


Dear Josh,

I have added a logout to the script after each import call. This time more
than 2000 entries were imported, but an error still occurred. Please could
you check the attached log? Is this the same issue with NFS, or something
different? Is it possible that using sessions might help?
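
The change amounts to roughly the following pattern (a minimal sketch rather
than the actual driver code; the host, credentials and file name are
placeholders):

    import subprocess

    # Import one entry, then immediately close the CLI session that the
    # import created, so sessions do not accumulate on the server.
    subprocess.check_call(
        "bin/omero import -s ves-ebi-8c -u username -w password -d 1 "
        "emd_6371-top.map",
        shell=True,
    )
    subprocess.check_call("bin/omero logout", shell=True)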

Thank you and best regards,
Andrii

On 02/05/2016 06:44, Josh Moore wrote:
> On Fri, Apr 29, 2016 at 12:41 PM, Andrii Iudin <andrii at ebi.ac.uk> wrote:
>> Dear Josh,
> Hi Andrii,
>
>
>> Thank you for providing the possible solution to our problem. We will test
>> the session usage and get back with the results. Please could you clarify a
>> few things about your propositions?
>>
>> Is it possible to add a wait time somewhere in the code to compensate for
>> the slower NFS locking?
> Conceivably, but considering the state the server could be in at that
> point (shutdown, etc.), it's difficult to know. One option is to put
> your /OMERO directory on a non-NFS filesystem and then symlink in
> individual directories from NFS. Ultimately, though, this points to an
> issue with the remote fileshare that needs to be looked into.
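
A rough illustration of that layout, with purely hypothetical paths: keep
/OMERO itself on local disk and link individual large directories in from the
NFS mount.

    import os

    # Hypothetical paths: /OMERO stays on a local filesystem, while one
    # data directory is symlinked in from the NFS share.
    local_repo = "/OMERO"
    nfs_dir = "/nfs/share/omero/big-data"
    os.symlink(nfs_dir, os.path.join(local_repo, "big-data"))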
>
>
>> As far as I can see we do not call
>> bin/omero login
> `bin/omero import` calls `login` if no login is present.
>
>
>> explicitly at this moment. Is it an integral part of the import? There is
>> also BlitzGateway.connect() call before the script goes into the loop over
>> all images.
> Agreed. There are a couple of different logins in play here, which
> makes it all a bit complicated. One option would be to get everything
> into the same process with no subprocess calls to `bin/omero import`.
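
As an illustration of the in-process route, the import can be driven through
the OMERO Python CLI machinery instead of a shell command (a sketch only; the
host, credentials and file name are placeholders):

    from omero.cli import CLI

    # Run the import inside the current Python process rather than
    # spawning `bin/omero import` as a subprocess.
    cli = CLI()
    cli.loadplugins()
    cli.invoke([
        "import",
        "-s", "ves-ebi-8c",
        "-u", "username",
        "-w", "password",
        "-d", "1",
        "emd_6371-top.map",
    ])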
>
>
>> Does this mean then that we should call logout after each import?
> That's probably the easiest thing to test. Longer-term, it'd be better
> to use a session key.
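
For the session-key route, the key from the script's existing BlitzGateway
connection could be reused for every import subprocess, for example (a sketch
with placeholder host, credentials and file name):

    from omero.gateway import BlitzGateway
    import subprocess

    conn = BlitzGateway("username", "password", host="ves-ebi-8c", port=4064)
    conn.connect()
    session_key = conn.getEventContext().sessionUuid

    # Passing the key with -k reuses the one session instead of creating a
    # new one per import, and keeps the password off the command line.
    subprocess.check_call([
        "bin/omero", "import",
        "-s", "ves-ebi-8c", "-p", "4064",
        "-k", session_key,
        "-d", "1",
        "emd_6371-top.map",
    ])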
>
>
>> Thank you and best regards,
>> Andrii
> Cheers,
> ~Josh.
>
>
>
>> On 28/04/2016 10:17, Josh Moore wrote:
>>> On Wed, Apr 27, 2016 at 11:40 AM, Andrii Iudin <andrii at ebi.ac.uk> wrote:
>>>> Dear Josh,
>>> Hi Andrii,
>>>
>>>
>>>> Thank you for pointing to the documentation on the remote shares. Those
>>>> .lock files usually appear if we stop the server after one of the
>>>> "crashes".
>>>> When stopping and starting the server during normal operation, they do
>>>> not seem to be created.
>>> It sounds like a race condition. When the server is under pressure,
>>> etc., there's no time for the slower NFS locking implementation to do
>>> what it should. This is what makes the remote share not behave as a
>>> POSIX filesystem should. There has been some success with other
>>> versions of NFS and lockd tuning.
>>>
>>>
>>>> The run_command definition is as follows:
>>>>       def run_command(self, command, logFile=None):
>>> Thanks for the definition. I don't see anything off-hand in your code.
>>> If there's a keep-alive bug in the import code itself, you might try
>>> running a separate process with:
>>>
>>>       bin/omero sessions keepalive
>>>
>>> You can either do that in a console for testing, or via your Python
>>> driver itself. If that fixes the problem, then we can help you
>>> integrate that code into your main script without the need for a
>>> subprocess. Additionally, the session UUID created by that method
>>> could be used in all of your import subprocesses, which would 1) avoid
>>> passing the password around and 2) lower the overhead on the server.
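
One way to drive that from the Python script would be to keep the keepalive
command running in the background for the duration of the import loop (a
sketch; run_all_imports stands in for the existing per-entry loop):

    import subprocess

    # Keep the CLI session alive in a separate process while the imports run.
    keepalive = subprocess.Popen(["bin/omero", "sessions", "keepalive"])
    try:
        run_all_imports()  # placeholder for the existing per-entry import loop
    finally:
        keepalive.terminate()
        keepalive.wait()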
>>>
>>> (In fact, now that I think of it, if you don't have a call to
>>> `bin/omero logout` anywhere in your code, this may be exactly the
>>> problem that you are running into. Each call to `bin/omero login`
>>> creates a new session which is kept alive for the default session
>>> timeout.)
>>>
>>> Cheers,
>>> ~Josh.
>>>
>>>
>>>
>>>
>>>> Best regards,
>>>> Andrii
>>>>
>>>>
>>>> On 26/04/2016 21:00, Josh Moore wrote:
>>>>> Hi Andrii,
>>>>>
>>>>> On Tue, Apr 26, 2016 at 10:56 AM, Andrii Iudin <andrii at ebi.ac.uk> wrote:
>>>>>> Dear Josh,
>>>>>>
>>>>>> Please find attached the import script. For each EMDB entry it performs
>>>>>> an import of six images - three sides and their thumbnails.
>>>>> Thanks for this. And where's the definition of `run_command`?
>>>>>
>>>>>
>>>>>> To stop OMERO we use the "omero web stop" and then "omero admin stop"
>>>>>> commands. After this it is necessary to remove the
>>>>>> var/OMERO.data/.omero/repository/*/.lock files before starting OMERO
>>>>>> again. The filesystem is NFS.
>>>>> I'd assume then that disconnections & the .lock files are unrelated.
>>>>> Please see
>>>>>
>>>>> https://www.openmicroscopy.org/site/support/omero5.2/sysadmins/unix/server-binary-repository.html#locking-and-remote-shares
>>>>> regarding using remote shares.
>>>>>
>>>>> Cheers,
>>>>> ~Josh.
>>>>>
>>>>>
>>>>>
>>>>>> Best regards,
>>>>>> Andrii
>>>>>>
>>>>>>
>>>>>> On 25/04/2016 16:21, Josh Moore wrote:
>>>>>>> On Fri, Apr 22, 2016 at 12:41 PM, Andrii Iudin <andrii at ebi.ac.uk>
>>>>>>> wrote:
>>>>>>>> Dear OMERO developers,
>>>>>>>>
>>>>>>>> We are experiencing an issue when importing a large number of images in
>>>>>>>> a single continuous run. This usually happens after importing more than
>>>>>>>> a thousand images. Please see the log excerpts below. Increasing the
>>>>>>>> time interval between imports seemed to help a bit, however the issue
>>>>>>>> ultimately happened anyway.
>>>>>>> Is this script available publicly? It would be useful to see how it's
>>>>>>> working.
>>>>>>>
>>>>>>>
>>>>>>>> To get the OMERO server working again after this happens, it is
>>>>>>>> necessary to stop it, remove the .lock files and start the server
>>>>>>>> again. It would be much appreciated if you could point us to a possible
>>>>>>>> way to solve this issue.
>>>>>>> How did you stop OMERO? Is your file system on NFS or another remote
>>>>>>> share?
>>>>>>>
>>>>>>>
>>>>>>>> Thank you and with best regards,
>>>>>>>> Andrii
>>>>>>> Cheers,
>>>>>>> ~Josh.

-------------- next part --------------
Traceback (most recent call last):
  File "/nfs/msd/em/crontabs/prod/empiar3d/importEmpiar3d.py", line 83, in processEntries
    importImageIntoOMERO(e3d, entryName, inputDir, errorDir, extension, deployment)
  File "/nfs/msd/em/crontabs/prod/empiar3d/importEmpiar3d.py", line 59, in importImageIntoOMERO
    e3d.run_command('omero import -s {0} -u {1} -w {2} -d {3} {4} {5} {6} {7} {8} {9}'.format(e3d.omeroHost, e3d.omeroUser, e3d.omeroPassword, e3d.omeroDataset, topImage, frontImage, sideImage, topThumbImage, frontThumbImage, sideThumbImage), f_stdLog)
  File "/nfs/msd/em/crontabs/prod/empiar3d/empiar3d.py", line 143, in run_command
    raise Exception("Failure executing: " + command + " :::out::: " + out + " :::err::: " + err)
Exception: Failure executing: omero import -s ves-ebi-8c -u username -w password -d 1 /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries//EMD-6371/emd_6371-top.map /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries//EMD-6371/emd_6371-front.map /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries//EMD-6371/emd_6371-side.map /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries//EMD-6371/emd_6371-top-thumb.map /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries//EMD-6371/emd_6371-front-thumb.map /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries//EMD-6371/emd_6371-side-thumb.map :::out::: Previously logged in to ves-ebi-8c:4064 as root
 :::err::: Created session d445f669-e455-47b8-9458-eb4d80965aa9 (root at ves-ebi-8c:4064). Idle timeout: 10 min. Current group: system
2016-05-06 01:25:51,089 1837       [      main] ERROR         ome.formats.importer.ImportConfig - Error flushing preferences
2016-05-06 01:25:51,102 1850       [      main] INFO          ome.formats.importer.ImportConfig - OMERO Version: 5.1.4-ice35-b55
2016-05-06 01:25:51,119 1867       [      main] INFO          ome.formats.importer.ImportConfig - Bioformats version: 5.1.4 revision: 05840624ab3d1d1dca14d1ccfebabcb61c42ec27 date: 3 September 2015
2016-05-06 01:25:51,129 1877       [      main] INFO   formats.importer.cli.CommandLineImporter - Log levels -- Bio-Formats: ERROR OMERO.importer: INFO
2016-05-06 01:25:51,763 2511       [      main] INFO      ome.formats.importer.ImportCandidates - Depth: 4 Metadata Level: MINIMUM
2016-05-06 01:25:52,410 3158       [      main] INFO      ome.formats.importer.ImportCandidates - 6 file(s) parsed into 6 group(s) with 6 call(s) to setId in 481ms. (647ms total) [0 unknowns]
2016-05-06 01:25:52,660 3408       [      main] INFO       ome.formats.OMEROMetadataStoreClient - Attempting initial SSL connection to ves-ebi-8c:4064
2016-05-06 01:25:53,588 4336       [      main] INFO       ome.formats.OMEROMetadataStoreClient - Insecure connection requested, falling back
2016-05-06 01:25:54,851 5599       [      main] INFO       ome.formats.OMEROMetadataStoreClient - Server: 5.1.4
2016-05-06 01:25:54,852 5600       [      main] INFO       ome.formats.OMEROMetadataStoreClient - Client: 5.1.4-ice35-b55
2016-05-06 01:25:54,852 5600       [      main] INFO       ome.formats.OMEROMetadataStoreClient - Java Version: 1.6.0_33
2016-05-06 01:25:54,852 5600       [      main] INFO       ome.formats.OMEROMetadataStoreClient - OS Name: Linux
2016-05-06 01:25:54,852 5600       [      main] INFO       ome.formats.OMEROMetadataStoreClient - OS Arch: amd64
2016-05-06 01:25:54,852 5600       [      main] INFO       ome.formats.OMEROMetadataStoreClient - OS Version: 2.6.32-504.1.3.el6.x86_64
2016-05-06 01:25:55,014 5762       [      main] INFO       ome.formats.OMEROMetadataStoreClient - Call context: {omero.group:0}
2016-05-06 01:25:55,047 5795       [      main] INFO   ormats.importer.cli.LoggingImportMonitor - FILESET_UPLOAD_PREPARATION
2016-05-06 01:25:56,952 7700       [      main] INFO   ormats.importer.cli.LoggingImportMonitor - FILESET_UPLOAD_START
2016-05-06 01:25:56,966 7714       [      main] INFO   ts.importer.transfers.UploadFileTransfer - Transferring /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries/EMD-6371/emd_6371-top.map...
2016-05-06 01:25:56,997 7745       [      main] INFO   ormats.importer.cli.LoggingImportMonitor - FILE_UPLOAD_STARTED: /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries/EMD-6371/emd_6371-top.map
06-May-2016 01:26:21 java.util.prefs.FileSystemPreferences syncWorld
WARNING: Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
06-May-2016 01:26:51 java.util.prefs.FileSystemPreferences syncWorld
WARNING: Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
06-May-2016 01:27:21 java.util.prefs.FileSystemPreferences syncWorld
WARNING: Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
06-May-2016 01:27:51 java.util.prefs.FileSystemPreferences syncWorld
WARNING: Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
06-May-2016 01:28:21 java.util.prefs.FileSystemPreferences syncWorld
WARNING: Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
.......
06-May-2016 03:27:51 java.util.prefs.FileSystemPreferences syncWorld
WARNING: Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
06-May-2016 03:28:21 java.util.prefs.FileSystemPreferences syncWorld
WARNING: Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
06-May-2016 03:28:51 java.util.prefs.FileSystemPreferences syncWorld
WARNING: Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
2016-05-06 03:28:56,775 7387523    [1-thread-1] ERROR  me.formats.importer.util.ClientKeepAlive - Exception while executing ping(), logging Connector out:
java.lang.RuntimeException: Ice.ConnectionLostException
    error = 0
        at ome.formats.OMEROMetadataStoreClient.ping(OMEROMetadataStoreClient.java:763) ~[blitz.jar:na]
        at ome.formats.importer.util.ClientKeepAlive.run(ClientKeepAlive.java:69) ~[blitz.jar:na]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.6.0_33]
        at java.lang.Thread.run(Thread.java:701) ~[na:1.6.0_33]
Caused by: Ice.ConnectionLostException: java.io.IOException: Connection reset by peer
        at IceInternal.Outgoing.invoke(Outgoing.java:158) ~[ice.jar:na]
        at omero.api._ServiceFactoryDelM.keepAllAlive(_ServiceFactoryDelM.java:1465) ~[blitz.jar:na]
        at omero.api.ServiceFactoryPrxHelper.keepAllAlive(ServiceFactoryPrxHelper.java:5085) ~[blitz.jar:na]
        at omero.api.ServiceFactoryPrxHelper.keepAllAlive(ServiceFactoryPrxHelper.java:5044) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.ping(OMEROMetadataStoreClient.java:756) ~[blitz.jar:na]
        ... 9 common frames omitted
Caused by: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcher.read0(Native Method) ~[na:1.6.0_33]
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.6.0_33]
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251) ~[na:1.6.0_33]
        at sun.nio.ch.IOUtil.read(IOUtil.java:224) ~[na:1.6.0_33]
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254) ~[na:1.6.0_33]
        at IceInternal.TcpTransceiver.read(TcpTransceiver.java:245) ~[ice.jar:na]
        at Ice.ConnectionI.message(ConnectionI.java:996) ~[ice.jar:na]
        at IceInternal.ThreadPool.run(ThreadPool.java:321) ~[ice.jar:na]
        at IceInternal.ThreadPool.access$300(ThreadPool.java:12) ~[ice.jar:na]
        at IceInternal.ThreadPool$EventHandlerThread.run(ThreadPool.java:693) ~[ice.jar:na]
        ... 1 common frames omitted
2016-05-06 03:28:56,957 7387705    [1-thread-1] WARN       ome.formats.OMEROMetadataStoreClient - Exception closing d445f669-e455-47b8-9458-eb4d80965aa9/9b3d17d2-83d9-4877-86f9-e2304b4c03bdomero.api.RawFileStore -t -e 1.0:tcp -h 10.3.2.140 -p 41474
Ice.ConnectionLostException: java.io.IOException: Connection reset by peer
        at IceInternal.ConnectRequestHandler.getConnection(ConnectRequestHandler.java:244) ~[ice.jar:na]
        at IceInternal.ConnectRequestHandler.sendRequest(ConnectRequestHandler.java:141) ~[ice.jar:na]
        at IceInternal.Outgoing.invoke(Outgoing.java:77) ~[ice.jar:na]
        at omero.api._RawFileStoreDelM.close(_RawFileStoreDelM.java:459) ~[blitz.jar:na]
        at omero.api.RawFileStorePrxHelper.close(RawFileStorePrxHelper.java:1877) ~[blitz.jar:na]
        at omero.api.RawFileStorePrxHelper.close(RawFileStorePrxHelper.java:1839) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeQuietly(OMEROMetadataStoreClient.java:1054) [blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeServices(OMEROMetadataStoreClient.java:1075) [blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.logout(OMEROMetadataStoreClient.java:1101) [blitz.jar:na]
        at ome.formats.importer.util.ClientKeepAlive.run(ClientKeepAlive.java:78) ~[blitz.jar:na]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.6.0_33]
        at java.lang.Thread.run(Thread.java:701) ~[na:1.6.0_33]
Caused by: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcher.read0(Native Method) ~[na:1.6.0_33]
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.6.0_33]
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251) ~[na:1.6.0_33]
        at sun.nio.ch.IOUtil.read(IOUtil.java:224) ~[na:1.6.0_33]
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254) ~[na:1.6.0_33]
        at IceInternal.TcpTransceiver.read(TcpTransceiver.java:245) ~[ice.jar:na]
        at Ice.ConnectionI.message(ConnectionI.java:996) ~[ice.jar:na]
        at IceInternal.ThreadPool.run(ThreadPool.java:321) ~[ice.jar:na]
        at IceInternal.ThreadPool.access$300(ThreadPool.java:12) ~[ice.jar:na]
        at IceInternal.ThreadPool$EventHandlerThread.run(ThreadPool.java:693) ~[ice.jar:na]
        ... 1 common frames omitted
2016-05-06 03:28:56,957 7387705    [      main] ERROR     ome.formats.importer.cli.ErrorHandler - FILE_EXCEPTION: /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries/EMD-6371/emd_6371-top.map
Ice.ConnectionLostException: java.io.IOException: Connection reset by peer
        at Ice.ConnectionI.sendAsyncRequest(ConnectionI.java:391) ~[ice.jar:na]
        at IceInternal.ConnectionRequestHandler.sendAsyncRequest(ConnectionRequestHandler.java:53) ~[ice.jar:na]
        at IceInternal.OutgoingAsync.__send(OutgoingAsync.java:400) ~[ice.jar:na]
        at Ice.RouterPrxHelper.begin_addProxies(RouterPrxHelper.java:184) ~[ice.jar:na]
        at Ice.RouterPrxHelper.begin_addProxies(RouterPrxHelper.java:158) ~[ice.jar:na]
        at IceInternal.RouterInfo.addProxy(RouterInfo.java:179) ~[ice.jar:na]
        at IceInternal.ConnectRequestHandler.setConnection(ConnectRequestHandler.java:274) ~[ice.jar:na]
        at IceInternal.RoutableReference$3.setConnection(RoutableReference.java:922) ~[ice.jar:na]
        at IceInternal.OutgoingConnectionFactory.create(OutgoingConnectionFactory.java:324) ~[ice.jar:na]
        at IceInternal.RoutableReference.createConnection(RoutableReference.java:907) ~[ice.jar:na]
        at IceInternal.RoutableReference$1.setEndpoints(RoutableReference.java:542) ~[ice.jar:na]
        at IceInternal.RouterInfo.getClientEndpoints(RouterInfo.java:98) ~[ice.jar:na]
        at IceInternal.RoutableReference.getConnection(RoutableReference.java:534) ~[ice.jar:na]
        at IceInternal.ConnectRequestHandler.connect(ConnectRequestHandler.java:43) ~[ice.jar:na]
        at Ice._ObjectDelM.setup(_ObjectDelM.java:264) ~[ice.jar:na]
        at Ice.ObjectPrxHelperBase.createDelegate(ObjectPrxHelperBase.java:2299) ~[ice.jar:na]
        at Ice.ObjectPrxHelperBase.__getDelegate(ObjectPrxHelperBase.java:2204) ~[ice.jar:na]
        at omero.api.RawFileStorePrxHelper.close(RawFileStorePrxHelper.java:1875) ~[blitz.jar:na]
        at omero.api.RawFileStorePrxHelper.close(RawFileStorePrxHelper.java:1839) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeQuietly(OMEROMetadataStoreClient.java:1054) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeServices(OMEROMetadataStoreClient.java:1075) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.logout(OMEROMetadataStoreClient.java:1101) ~[blitz.jar:na]
        at ome.formats.importer.util.ClientKeepAlive.run(ClientKeepAlive.java:78) ~[blitz.jar:na]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.6.0_33]
        at java.lang.Thread.run(Thread.java:701) ~[na:1.6.0_33]
Caused by: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcher.read0(Native Method) ~[na:1.6.0_33]
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.6.0_33]
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251) ~[na:1.6.0_33]
        at sun.nio.ch.IOUtil.read(IOUtil.java:224) ~[na:1.6.0_33]
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254) ~[na:1.6.0_33]
        at IceInternal.TcpTransceiver.read(TcpTransceiver.java:245) ~[ice.jar:na]
        at Ice.ConnectionI.message(ConnectionI.java:996) ~[ice.jar:na]
        at IceInternal.ThreadPool.run(ThreadPool.java:321) ~[ice.jar:na]
        at IceInternal.ThreadPool.access$300(ThreadPool.java:12) ~[ice.jar:na]
        at IceInternal.ThreadPool$EventHandlerThread.run(ThreadPool.java:693) ~[ice.jar:na]
        ... 1 common frames omitted
2016-05-06 03:28:56,960 7387708    [      main] ERROR        ome.formats.importer.ImportLibrary - Error on import
Ice.ConnectionLostException: java.io.IOException: Connection reset by peer
        at Ice.ConnectionI.sendAsyncRequest(ConnectionI.java:391) ~[ice.jar:na]
        at IceInternal.ConnectionRequestHandler.sendAsyncRequest(ConnectionRequestHandler.java:53) ~[ice.jar:na]
        at IceInternal.OutgoingAsync.__send(OutgoingAsync.java:400) ~[ice.jar:na]
        at Ice.RouterPrxHelper.begin_addProxies(RouterPrxHelper.java:184) ~[ice.jar:na]
        at Ice.RouterPrxHelper.begin_addProxies(RouterPrxHelper.java:158) ~[ice.jar:na]
        at IceInternal.RouterInfo.addProxy(RouterInfo.java:179) ~[ice.jar:na]
        at IceInternal.ConnectRequestHandler.setConnection(ConnectRequestHandler.java:274) ~[ice.jar:na]
        at IceInternal.RoutableReference$3.setConnection(RoutableReference.java:922) ~[ice.jar:na]
        at IceInternal.OutgoingConnectionFactory.create(OutgoingConnectionFactory.java:324) ~[ice.jar:na]
        at IceInternal.RoutableReference.createConnection(RoutableReference.java:907) ~[ice.jar:na]
        at IceInternal.RoutableReference$1.setEndpoints(RoutableReference.java:542) ~[ice.jar:na]
        at IceInternal.RouterInfo.getClientEndpoints(RouterInfo.java:98) ~[ice.jar:na]
        at IceInternal.RoutableReference.getConnection(RoutableReference.java:534) ~[ice.jar:na]
        at IceInternal.ConnectRequestHandler.connect(ConnectRequestHandler.java:43) ~[ice.jar:na]
        at Ice._ObjectDelM.setup(_ObjectDelM.java:264) ~[ice.jar:na]
        at Ice.ObjectPrxHelperBase.createDelegate(ObjectPrxHelperBase.java:2299) ~[ice.jar:na]
        at Ice.ObjectPrxHelperBase.__getDelegate(ObjectPrxHelperBase.java:2204) ~[ice.jar:na]
        at omero.api.RawFileStorePrxHelper.close(RawFileStorePrxHelper.java:1875) ~[blitz.jar:na]
        at omero.api.RawFileStorePrxHelper.close(RawFileStorePrxHelper.java:1839) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeQuietly(OMEROMetadataStoreClient.java:1054) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeServices(OMEROMetadataStoreClient.java:1075) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.logout(OMEROMetadataStoreClient.java:1101) ~[blitz.jar:na]
        at ome.formats.importer.util.ClientKeepAlive.run(ClientKeepAlive.java:78) ~[blitz.jar:na]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.6.0_33]
        at java.lang.Thread.run(Thread.java:701) ~[na:1.6.0_33]
Caused by: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcher.read0(Native Method) ~[na:1.6.0_33]
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.6.0_33]
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251) ~[na:1.6.0_33]
        at sun.nio.ch.IOUtil.read(IOUtil.java:224) ~[na:1.6.0_33]
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254) ~[na:1.6.0_33]
        at IceInternal.TcpTransceiver.read(TcpTransceiver.java:245) ~[ice.jar:na]
        at Ice.ConnectionI.message(ConnectionI.java:996) ~[ice.jar:na]
        at IceInternal.ThreadPool.run(ThreadPool.java:321) ~[ice.jar:na]
        at IceInternal.ThreadPool.access$300(ThreadPool.java:12) ~[ice.jar:na]
        at IceInternal.ThreadPool$EventHandlerThread.run(ThreadPool.java:693) ~[ice.jar:na]
        ... 1 common frames omitted
2016-05-06 03:28:56,960 7387708    [      main] INFO         ome.formats.importer.ImportLibrary - Exiting on error

==> Summary
0 files uploaded, 0 filesets created, 0 images imported, 1 error in 2:03:01.994
2016-05-06 03:28:56,978 7387726    [      main] WARN       ome.formats.OMEROMetadataStoreClient - Exception closing d445f669-e455-47b8-9458-eb4d80965aa9/319e4640-cc8d-4ba9-9736-09803697573eomero.api.RawPixelsStore -t -e 1.0:tcp -h 10.3.2.140 -p 41474
Ice.ConnectionLostException: null
        at IceInternal.ConnectRequestHandler.getConnection(ConnectRequestHandler.java:244) ~[ice.jar:na]
        at IceInternal.ConnectRequestHandler.sendRequest(ConnectRequestHandler.java:141) ~[ice.jar:na]
        at IceInternal.Outgoing.invoke(Outgoing.java:77) ~[ice.jar:na]
        at omero.api._RawPixelsStoreDelM.close(_RawPixelsStoreDelM.java:2003) ~[blitz.jar:na]
        at omero.api.RawPixelsStorePrxHelper.close(RawPixelsStorePrxHelper.java:9946) ~[blitz.jar:na]
        at omero.api.RawPixelsStorePrxHelper.close(RawPixelsStorePrxHelper.java:9908) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeQuietly(OMEROMetadataStoreClient.java:1054) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeServices(OMEROMetadataStoreClient.java:1078) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.logout(OMEROMetadataStoreClient.java:1101) ~[blitz.jar:na]
        at ome.formats.importer.util.ClientKeepAlive.run(ClientKeepAlive.java:78) ~[blitz.jar:na]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.6.0_33]
        at java.lang.Thread.run(Thread.java:701) ~[na:1.6.0_33]
2016-05-06 03:28:56,978 7387726    [1-thread-1] WARN       ome.formats.OMEROMetadataStoreClient - Exception closing d445f669-e455-47b8-9458-eb4d80965aa9/319e4640-cc8d-4ba9-9736-09803697573eomero.api.RawPixelsStore -t -e 1.0:tcp -h 10.3.2.140 -p 41474
Ice.ConnectionLostException: null
        at IceInternal.ConnectRequestHandler.getConnection(ConnectRequestHandler.java:244) ~[ice.jar:na]
        at IceInternal.ConnectRequestHandler.sendRequest(ConnectRequestHandler.java:141) ~[ice.jar:na]
        at IceInternal.Outgoing.invoke(Outgoing.java:77) ~[ice.jar:na]
        at omero.api._RawPixelsStoreDelM.close(_RawPixelsStoreDelM.java:2003) ~[blitz.jar:na]
        at omero.api.RawPixelsStorePrxHelper.close(RawPixelsStorePrxHelper.java:9946) ~[blitz.jar:na]
        at omero.api.RawPixelsStorePrxHelper.close(RawPixelsStorePrxHelper.java:9908) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeQuietly(OMEROMetadataStoreClient.java:1054) [blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeServices(OMEROMetadataStoreClient.java:1078) [blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.logout(OMEROMetadataStoreClient.java:1101) [blitz.jar:na]
        at ome.formats.importer.util.ClientKeepAlive.run(ClientKeepAlive.java:78) ~[blitz.jar:na]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.6.0_33]
        at java.lang.Thread.run(Thread.java:701) ~[na:1.6.0_33]
2016-05-06 03:28:56,982 7387730    [1-thread-1] WARN       ome.formats.OMEROMetadataStoreClient - Exception closing d445f669-e455-47b8-9458-eb4d80965aa9/f6248e36-5a43-48ab-8b88-2e91c58488b4omero.api.ThumbnailStore -t -e 1.0:tcp -h 10.3.2.140 -p 41474
Ice.ConnectionLostException: null
        at IceInternal.ConnectRequestHandler.getConnection(ConnectRequestHandler.java:244) ~[ice.jar:na]
        at IceInternal.ConnectRequestHandler.sendRequest(ConnectRequestHandler.java:141) ~[ice.jar:na]
        at IceInternal.Outgoing.invoke(Outgoing.java:77) ~[ice.jar:na]
        at omero.api._ThumbnailStoreDelM.close(_ThumbnailStoreDelM.java:74) ~[blitz.jar:na]
        at omero.api.ThumbnailStorePrxHelper.close(ThumbnailStorePrxHelper.java:356) ~[blitz.jar:na]
        at omero.api.ThumbnailStorePrxHelper.close(ThumbnailStorePrxHelper.java:318) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeQuietly(OMEROMetadataStoreClient.java:1054) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeServices(OMEROMetadataStoreClient.java:1081) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.logout(OMEROMetadataStoreClient.java:1101) ~[blitz.jar:na]
        at ome.formats.importer.cli.CommandLineImporter.cleanup(CommandLineImporter.java:291) ~[blitz.jar:na]
        at ome.formats.importer.cli.CommandLineImporter.main(CommandLineImporter.java:864) ~[blitz.jar:na]
2016-05-06 03:28:56,982 7387730    [      main] WARN       ome.formats.OMEROMetadataStoreClient - Exception closing d445f669-e455-47b8-9458-eb4d80965aa9/f6248e36-5a43-48ab-8b88-2e91c58488b4omero.api.ThumbnailStore -t -e 1.0:tcp -h 10.3.2.140 -p 41474
Ice.ConnectionLostException: null
        at IceInternal.ConnectRequestHandler.getConnection(ConnectRequestHandler.java:244) ~[ice.jar:na]
        at IceInternal.ConnectRequestHandler.sendRequest(ConnectRequestHandler.java:141) ~[ice.jar:na]
        at IceInternal.Outgoing.invoke(Outgoing.java:77) ~[ice.jar:na]
        at omero.api._ThumbnailStoreDelM.close(_ThumbnailStoreDelM.java:74) ~[blitz.jar:na]
        at omero.api.ThumbnailStorePrxHelper.close(ThumbnailStorePrxHelper.java:356) ~[blitz.jar:na]
        at omero.api.ThumbnailStorePrxHelper.close(ThumbnailStorePrxHelper.java:318) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeQuietly(OMEROMetadataStoreClient.java:1054) [blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeServices(OMEROMetadataStoreClient.java:1081) [blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.logout(OMEROMetadataStoreClient.java:1101) [blitz.jar:na]
        at ome.formats.importer.cli.CommandLineImporter.cleanup(CommandLineImporter.java:291) ~[blitz.jar:na]
        at ome.formats.importer.cli.CommandLineImporter.main(CommandLineImporter.java:864) ~[blitz.jar:na]
2016-05-06 03:28:56,985 7387733    [      main] WARN       ome.formats.OMEROMetadataStoreClient - Exception closing d445f669-e455-47b8-9458-eb4d80965aa9/9b7c15fd-5f0d-4c0a-95f1-dbf621a345cfomero.api.MetadataStore -t -e 1.0:tcp -h 10.3.2.140 -p 41474
Ice.ConnectionLostException: java.io.IOException: Connection reset by peer
        at IceInternal.Outgoing.invoke(Outgoing.java:158) ~[ice.jar:na]
        at omero.api._MetadataStoreDelM.close(_MetadataStoreDelM.java:412) ~[blitz.jar:na]
        at omero.api.MetadataStorePrxHelper.close(MetadataStorePrxHelper.java:1560) ~[blitz.jar:na]
        at omero.api.MetadataStorePrxHelper.close(MetadataStorePrxHelper.java:1522) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeQuietly(OMEROMetadataStoreClient.java:1054) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeServices(OMEROMetadataStoreClient.java:1084) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.logout(OMEROMetadataStoreClient.java:1101) ~[blitz.jar:na]
        at ome.formats.importer.util.ClientKeepAlive.run(ClientKeepAlive.java:78) ~[blitz.jar:na]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.6.0_33]
        at java.lang.Thread.run(Thread.java:701) ~[na:1.6.0_33]
Caused by: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcher.read0(Native Method) ~[na:1.6.0_33]
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.6.0_33]
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251) ~[na:1.6.0_33]
        at sun.nio.ch.IOUtil.read(IOUtil.java:224) ~[na:1.6.0_33]
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254) ~[na:1.6.0_33]
        at IceInternal.TcpTransceiver.read(TcpTransceiver.java:245) ~[ice.jar:na]
        at Ice.ConnectionI.message(ConnectionI.java:996) ~[ice.jar:na]
        at IceInternal.ThreadPool.run(ThreadPool.java:321) ~[ice.jar:na]
        at IceInternal.ThreadPool.access$300(ThreadPool.java:12) ~[ice.jar:na]
        at IceInternal.ThreadPool$EventHandlerThread.run(ThreadPool.java:693) ~[ice.jar:na]
        ... 1 common frames omitted
2016-05-06 03:28:56,985 7387733    [1-thread-1] WARN       ome.formats.OMEROMetadataStoreClient - Exception closing d445f669-e455-47b8-9458-eb4d80965aa9/9b7c15fd-5f0d-4c0a-95f1-dbf621a345cfomero.api.MetadataStore -t -e 1.0:tcp -h 10.3.2.140 -p 41474
Ice.ConnectionLostException: java.io.IOException: Connection reset by peer
        at IceInternal.Outgoing.invoke(Outgoing.java:158) ~[ice.jar:na]
        at omero.api._MetadataStoreDelM.close(_MetadataStoreDelM.java:412) ~[blitz.jar:na]
        at omero.api.MetadataStorePrxHelper.close(MetadataStorePrxHelper.java:1560) ~[blitz.jar:na]
        at omero.api.MetadataStorePrxHelper.close(MetadataStorePrxHelper.java:1522) ~[blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeQuietly(OMEROMetadataStoreClient.java:1054) [blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.closeServices(OMEROMetadataStoreClient.java:1084) [blitz.jar:na]
        at ome.formats.OMEROMetadataStoreClient.logout(OMEROMetadataStoreClient.java:1101) [blitz.jar:na]
        at ome.formats.importer.util.ClientKeepAlive.run(ClientKeepAlive.java:78) ~[blitz.jar:na]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351) ~[na:1.6.0_33]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165) ~[na:1.6.0_33]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) ~[na:1.6.0_33]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.6.0_33]
        at java.lang.Thread.run(Thread.java:701) ~[na:1.6.0_33]
Caused by: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcher.read0(Native Method) ~[na:1.6.0_33]
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.6.0_33]
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251) ~[na:1.6.0_33]
        at sun.nio.ch.IOUtil.read(IOUtil.java:224) ~[na:1.6.0_33]
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254) ~[na:1.6.0_33]
        at IceInternal.TcpTransceiver.read(TcpTransceiver.java:245) ~[ice.jar:na]
        at Ice.ConnectionI.message(ConnectionI.java:996) ~[ice.jar:na]
        at IceInternal.ThreadPool.run(ThreadPool.java:321) ~[ice.jar:na]
        at IceInternal.ThreadPool.access$300(ThreadPool.java:12) ~[ice.jar:na]
        at IceInternal.ThreadPool$EventHandlerThread.run(ThreadPool.java:693) ~[ice.jar:na]
        ... 1 common frames omitted
-! 05/06/16 03:28:58.575 warning: Proxy keep alive failed.

Traceback (most recent call last):
  File "/nfs/msd/em/crontabs/prod/empiar3d/importEmpiar3d.py", line 83, in processEntries
    importImageIntoOMERO(e3d, entryName, inputDir, errorDir, extension, deployment)
  File "/nfs/msd/em/crontabs/prod/empiar3d/importEmpiar3d.py", line 59, in importImageIntoOMERO
    e3d.run_command('omero import -s {0} -u {1} -w {2} -d {3} {4} {5} {6} {7} {8} {9}'.format(e3d.omeroHost, e3d.omeroUser, e3d.omeroPassword, e3d.omeroDataset, topImage, frontImage, sideImage, topThumbImage, frontThumbImage, sideThumbImage), f_stdLog)
  File "/nfs/msd/em/crontabs/prod/empiar3d/empiar3d.py", line 143, in run_command
    raise Exception("Failure executing: " + command + " :::out::: " + out + " :::err::: " + err)
Exception: Failure executing: omero import -s ves-ebi-8c -u username -w password -d 1 /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries//EMD-5679/emd_5679-top.map /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries//EMD-5679/emd_5679-front.map /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries//EMD-5679/emd_5679-side.map /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries//EMD-5679/emd_5679-top-thumb.map /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries//EMD-5679/emd_5679-front-thumb.map /nfs/public/rw/pdbe/mol2cell/test-data/empiar3d-entries//EMD-5679/emd_5679-side-thumb.map :::out::: Previous session expired for root on ves-ebi-8c:4064
 :::err::: InternalException: Failed to connect: Ice.ConnectionLostException:
Connection reset by peer


