[ome-users] OMERO with multiple data directories
Benjamin Schmid
benjamin.schmid at fau.de
Tue Mar 27 10:03:53 BST 2018
Hi again,
as I have no experience with LVM, are there any caveats
using/configuring it?
The only parameter I came across is the PE size, and apparently it
doesn't matter too much with lvm2.
Is there anything else I should pay attention to?
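For inspecting an existing LVM setup and its PE size, a few read-only commands are enough; `omero_vg` below is a placeholder volume-group name, not anything from this thread:

```shell
# Inspection commands for an existing LVM setup (names are placeholders).
# With lvm2 the PE size no longer caps the maximum LV size, so the default
# 4 MiB is normally fine; it only affects allocation granularity.
pvs -o pv_name,vg_name,pv_size,pv_free      # physical volumes and free space
vgdisplay omero_vg | grep 'PE Size'         # the PE size the VG was created with
lvs -o lv_name,lv_size,seg_count omero_vg   # logical volumes and their segments
# Caveats worth noting: keep vgcfgbackup output with your normal backups,
# and keep partitions aligned (parted's default 1 MiB alignment is fine
# for iSCSI LUNs).
```

These are all non-destructive, so they are safe to run on a live system before deciding anything.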
Thank you very much,
Bene
On 26.03.2018 at 16:39, Benjamin Schmid wrote:
> Hi Josh,
>
> Thanks for your answers. Good to hear that you have experience (and
> evidently not a bad one) with LVM. This option is currently my favourite
> since it ensures scalability in the future.
>
> The size of the Dropbox is currently 4.7 TB, the ManagedRepository is
> 11 TB.
>
> After the expansion (and after creating a 2nd iSCSI LUN), I have
> basically 2 partitions with 16 TB each, one is more or less full, the
> other one is more or less empty.
>
> There are ca. 200 user folders.
>
> Thanks a lot,
> Bene
>
On 26.03.2018 at 16:22, Josh Moore wrote:
>> On Mon, Mar 26, 2018 at 10:12 AM, Benjamin Schmid
>> <benjamin.schmid at fau.de> wrote:
>>> Dear all,
>> Hi Bene,
>>
>>
>>> Sorry for the lengthy mail, and thanks to those who read it fully to
>>> the end ;)
>> Exposition in emails is encouraged.
>>
>>
>>> So far, we have been running OMERO (and the Dropbox) on an Ubuntu
>>> server with an attached Thecus storage array (N16000pro) with 16 TB of
>>> storage space. The Thecus machine is connected to the server via iSCSI
>>> and hosts (amongst others) the users' home and OMERO Dropbox folders.
>>> These are shared via Samba to the microscope computers.
>>>
>>> Because storage was filling up, we expanded the RAID volume on the
>>> Thecus machine. Afterwards, I also wanted to expand the iSCSI LUN.
>>> That's where the trouble started, because I realized that the maximum
>>> LUN size on the Thecus system is 16 TB. I can create another LUN, but
>>> this will basically end up as a second partition on the server. My
>>> question is now whether OMERO can use multiple data directories, or
>>> whether there is another solution to this problem.
>> In general, yes, OMERO can use multiple directories but there are, as
>> always, caveats. Regarding other technical, non-OMERO solutions, I'll
>> defer to the community.
>>
>>
>>
>>> What I thought about so far:
>>>
>>> * Put some of the users (the ones that occupy the most storage space)
>>> on the 2nd partition.
>>> Create symbolic links in both the Dropbox and ManagedRepository
>>> folders that point to the respective folders for these users on the
>>> 2nd partition:
>>> ManagedRepository/user1 -> /partition2/ManagedRepository/user1
>>> DropBox/user1 -> /partition2/DropBox/user1
>>> However, it seems OMERO Dropbox does not follow symbolic links (is
>>> there any way to make this work?)
>> DropBox needs to receive notifications of what's going on. So if DropBox
>> is watching /old_storage and the notification comes in on /new_storage,
>> then yes, DropBox won't be aware of it. One option would be to run
>> multiple DropBox servers. Another would be to configure DropBox for the
>> new location on a per-user basis, which is theoretically doable but
>> neither well-tested nor sysadmin-friendly.
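The per-user symlink layout described above can be sketched with throwaway directories standing in for the real mount points (note that, as discussed, filesystem notifications for events under the link *target* may never reach a watcher on the link itself):

```shell
set -eu
root=$(mktemp -d)                       # stands in for the server's storage root
mkdir -p "$root/partition2/ManagedRepository/user1" \
         "$root/partition2/DropBox/user1" \
         "$root/ManagedRepository" "$root/DropBox"
# point the per-user folders at the second partition
ln -s "$root/partition2/ManagedRepository/user1" "$root/ManagedRepository/user1"
ln -s "$root/partition2/DropBox/user1" "$root/DropBox/user1"
# a file created through the link physically lands on partition2
touch "$root/DropBox/user1/image.tif"
ls "$root/partition2/DropBox/user1"     # shows image.tif
```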
>>
>>
>>> * Leaving the DropBox in the primary OMERO data folder, and only
>>> moving the ManagedRepository folders of some users to the 2nd
>>> partition.
>>> Does not work because importing from the DropBox is done via hard
>>> links (which I very much appreciate), and hard links cannot cross
>>> filesystem boundaries.
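The hard-link constraint can be seen directly: a hard link is a second name for the same inode, and inode numbers only exist within one filesystem, so linking across devices fails with EXDEV ("Invalid cross-device link"). A small same-filesystem demo on throwaway files:

```shell
set -eu
d=$(mktemp -d)
echo pixels > "$d/original.tif"
# hard link: same inode, zero extra space used
ln "$d/original.tif" "$d/imported.tif"
stat -c '%h' "$d/original.tif"    # link count is now 2
stat -c '%i' "$d/imported.tif"    # same inode number as original.tif
# the same ln call across two filesystems fails with EXDEV, which is why
# DropBox and the ManagedRepository must live on one filesystem
```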
>> Understood. This is valuable feedback.
>>
>>
>>> * mhddfs (https://romanrm.net/mhddfs)
>>> I found this small Linux tool that basically joins several filesystems
>>> into a (virtual) large partition. However, it seems to have some
>>> impact on performance and, more severely, also does not create correct
>>> hard links when importing via OMERO DropBox.
>> This is new to me.
>>
>>
>>> * LVM
>>> This could also be used to combine several partitions into a single
>>> big one. The advantage over mhddfs is that it's integrated into the
>>> Linux kernel. The disadvantage is that the partitions used need to be
>>> initialized for LVM, so unlike mhddfs it doesn't work with existing
>>> partitions (holding existing data). I could, however, initialize the
>>> new partition with LVM, copy the data from the existing LUN onto it,
>>> free the first partition and then add it to the LVM-managed volume.
>>> Downside: copying the data will take a lot of time, and I have no
>>> experience with LVM; in particular, I do not know whether hard linking
>>> will work properly. Also, I have no idea how LVM would impact
>>> performance. Maybe somebody can provide some information about this.
>> Perhaps someone will chime in with the specifics of LVM in your
>> scenario, but we *do* use LVM on most if not all OME team-managed
>> systems. I haven't seen any hard-linking issues with LVM.
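The migration route described above (build LVM on the new LUN, copy, then absorb the old LUN) might look like the following sketch; `/dev/sdb` (old LUN), `/dev/sdc` (new LUN), the mount points, and `omero_vg` are all placeholders, not values from this thread:

```shell
# 1. build the LVM volume on the new, empty LUN
pvcreate /dev/sdc
vgcreate omero_vg /dev/sdc
lvcreate -l 100%FREE -n data omero_vg
mkfs.ext4 /dev/omero_vg/data
mount /dev/omero_vg/data /new_storage

# 2. copy everything over; -H preserves existing hard links
rsync -aH /old_storage/ /new_storage/

# 3. retire the old partition and fold its LUN into the volume group
umount /old_storage
pvcreate /dev/sdb
vgextend omero_vg /dev/sdb

# 4. grow the logical volume and the filesystem (ext4 grows online)
lvextend -l +100%FREE /dev/omero_vg/data
resize2fs /dev/omero_vg/data
```

The `-H` flag matters here: without it rsync would expand each hard-linked DropBox/ManagedRepository pair into two independent copies.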
>>
>>
>>> * Going away from iSCSI and instead sharing the entire RAID volume via
>>> NFS, which is then mounted on the server (and re-shared via Samba to
>>> the microscope computers).
>>> However, I have read a couple of times that re-sharing an NFS mount
>>> via Samba causes trouble and is not recommended. Can anybody confirm
>>> this?
>> Without knowing more, I'd be concerned that you wouldn't get the
>> DropBox notifications that you need.
>>
>>
>>> * Giving up on the hard-linking import and making users delete the
>>> data in their DropBox folders once it has been imported.
>>> Not really nice.
>> Have you looked at any of the "move" strategies? Do you have something
>> internally that would work like `rsync --remove-source-files` that you
>> would trust?
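A conservative stand-in for such a "move" (copy, verify byte-for-byte, and only then delete the source) can be built from plain POSIX tools; the temp directories below stand in for a DropBox folder and its destination:

```shell
set -eu
src=$(mktemp -d); dst=$(mktemp -d)   # stand-ins for DropBox and archive dirs
printf 'pixels' > "$src/image.tif"
# copy preserving attributes, compare byte-for-byte, then remove the source;
# the && chain means the source only disappears after a verified copy
cp -p "$src/image.tif" "$dst/image.tif"
cmp -s "$src/image.tif" "$dst/image.tif" && rm "$src/image.tif"
```

For reference, rsync's flag with this semantics is `--remove-source-files`, which deletes each source file only after its transfer has been verified.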
>>
>>
>>> * Giving up OMERO Dropbox and making users use OMERO.insight to
>>> import the acquired data.
>>> Not really nice.
>>>
>>>
>>> Has anybody had a similar problem in the past?
>>> What is the preferred way to solve this?
>>> Have I overlooked anything obvious to make this work?
>>> I'm not really happy with any of the options I outlined above.
>> If you don't mind, could you share with us approximate sizes for
>> ManagedRepository and DropBox (and other large directories) that you
>> are looking to re-arrange? That along with how many users and
>> used/free sizes of your various mount points might help to suggest
>> something.
>>
>> All the best,
>> ~Josh.
>>
>>
>>
>>> Thank you very much in advance,
>>> Benjamin
>> _______________________________________________
>> ome-users mailing list
>> ome-users at lists.openmicroscopy.org.uk
>> http://lists.openmicroscopy.org.uk/mailman/listinfo/ome-users
>
--
Optical Imaging Centre Erlangen
Hartmannstr. 14
91052 Erlangen, Germany
http://www.oice.erlangen.de