[ome-users] OMERO at big sites - delegated administration + an interesting object-storage idea.

Jake Carroll jake.carroll at uq.edu.au
Thu Jun 18 11:32:28 BST 2015


Hi list!

We’re running OMERO at a fairly large scale now (big ingest, lots of instruments, plenty of IO and lots of compute cycles) across a significant network. We’re (as Jason alluded to in a previous post) doing things at a cloud scale with OMERO, which still seems to be a bit unusual, from what I have found.

Anyway...

One thing that has come up recently is the notion of delegated administration. Here is an example.

Org Unit “A” is the controlling unit that runs an OMERO platform. It has lots of users and is the main provider of the service.

Org Unit “B” says “hey…that is darn cool. We’d like some of the OMERO love, too! Can we join you?”

Org Unit “A” says: “Of course! We share the love, and we love OMERO!”

In our LDAP binds we then allow said org unit access. But I got to thinking a bit further afield about something better, or even nicer: I like the idea of multi-tenancy on my omero-cloud instance, and I like the idea of my delegated administrators (as I like to call them) being in control of their own destiny and, to an extent, of their users, such that on a large OMERO instance you’d have an effective waterfall model of administrative chains.

OU “A” can oversee it all.

OU “B” has some selected administrators who can access/modify/work with the OU “B” users belonging to that bit of the LDAP container (or some other access control mechanism).

It would make OMERO properly multi-tenancy savvy in many respects.
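
For what it’s worth, OMERO’s existing group-owner mechanism seems to get part of the way there already, since owners can manage their own group’s membership. A rough sketch of promoting a delegated admin, assuming OMERO.py 5.x (server, group and user names entirely hypothetical):

    # Rough sketch, OMERO.py 5.x assumed; all names below are hypothetical.
    from omero.gateway import BlitzGateway

    conn = BlitzGateway("root", "secret", host="omero.example.org", port=4064)
    conn.connect()

    admin = conn.getAdminService()            # omero.api.IAdmin
    group = admin.lookupGroup("ou-b")         # OU "B"'s group
    owner = admin.lookupExperimenter("jdoe")  # the delegated administrator
    admin.setGroupOwner(group, owner)         # owners can add/remove members

    conn.close()

As far as I can tell, though, what’s missing is the waterfall above that: an owner of the OU “B” group still can’t create brand new OU “B” users or groups without full server-admin rights.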

Further, with regard to OMERO.fs, it would be ideal to be able to specify, in the backend, multiple points of IO exit for the OMERO data storage location, such that the omero.data.dir variable could point at multiple backends for different OUs from the same OMERO instance. This would both logically and physically compartmentalise the OMERO data domain. [Which could be a good thing for more reasons than one, not least IO scheduling and performance characteristics at a filesystem level for different OMERO workload types.]
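
In the meantime, the closest approximation I can see is doing it below OMERO at the OS level, e.g. bind-mounting different filesystems under the managed repository (assuming the repository path template has been set up to lead with a per-OU directory). A rough sketch; all paths are hypothetical and it needs root:

    # Rough sketch (paths hypothetical, run as root): route each OU's
    # subtree of the managed repository onto its own filesystem via bind
    # mounts, approximating per-OU storage backends below OMERO.
    import subprocess

    REPO = "/OMERO/ManagedRepository"
    BACKENDS = {
        "ou-a": "/mnt/fast-scratch/ou-a",  # e.g. parallel filesystem
        "ou-b": "/mnt/bulk-store/ou-b",    # e.g. cheaper bulk tier
    }

    for ou, backend in BACKENDS.items():
        target = "%s/%s" % (REPO, ou)
        subprocess.check_call(["mkdir", "-p", target, backend])
        subprocess.check_call(["mount", "--bind", backend, target])

That sort of thing works, but it is invisible to OMERO itself, which is exactly why first-class support for multiple backends would be nicer.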

Finally, I have been speaking with a colleague at Harvard about the semantics of parallel filesystem access, scale and the limitations of POSIX.

You know what would be really cool? If we could create an object-store provider backend for OMERO to tap into object storage APIs. I really *really* like the idea of being able to natively target OpenStack Swift buckets, Amazon S3 buckets and native Ceph RADOS gateway stores. Thinking out loud, there is huge potential to scale OMERO further in the cloud, massive potential for data reuse, and even further extensibility benefits to be derived from scaling out like this.
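
To make that concrete, the client surface area needed is tiny. A rough sketch with boto3 (endpoint, bucket and key names all hypothetical); the same few calls would target Amazon S3, a Ceph RADOS gateway, or Swift behind an S3-compatibility layer, just by switching the endpoint:

    # Rough sketch (endpoint/bucket/key hypothetical): the S3 API as a
    # lowest common denominator across S3, Ceph RADOS gw and Swift+S3-compat.
    import boto3

    s3 = boto3.client("s3", endpoint_url="https://radosgw.example.org")

    # Write path: push an original file into the object store.
    with open("image.ome.tiff", "rb") as f:
        s3.put_object(Bucket="omero-repo", Key="ou-b/image.ome.tiff", Body=f)

    # Read path: stream it back.
    obj = s3.get_object(Bucket="omero-repo", Key="ou-b/image.ome.tiff")
    data = obj["Body"].read()

The interesting work is presumably underneath that: today’s read path assumes POSIX semantics, which is exactly the limitation my Harvard conversation kept circling back to.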

Just a few thoughts. Apologies for the idea overload. I just needed to get it down on the page for the list to ponder and to tell me “We’ve already done that, Jake, don’t worry…it’s in the pipeline for 5.2 or 5.3”, etc.

Talk soon.

-jc