Friday, May 06, 2011

Last time I was talking about Arc, I mentioned that there was an issue with LCMAPS relating to the bitness of the available libraries, and that once a 64-bit LCMAPS library was available, that would be that.
Well, as you might have inferred from the very slight delay, there's just a teensy bit more to it than that.
64-bit libraries are now commonplace, and did, indeed, resolve the problem we had. However, they just turned up more problems.
Cue one long and rather frustrating search down the rabbit hole of shared library dependencies. The root problem was that nothing was defining a symbol 'getMajorVersionNumber()' (nor, presumably, its minor and patch number siblings). Finding what _should_ be doing that, and what those values ought to be, was the tricky part. Perhaps that's more a symptom of my not having spent much time debugging shared library issues, rather than a sign of a genuinely hard problem.
In the end, it's a known problem with the VOMS libraries, and it's not hard to correct for on the small scale, by adding stub methods that return 0 to the application code, and compiling with -rdynamic.
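For reference, the workaround looks something like this - a minimal sketch, where the 'minor' and 'patch' symbol names and the int return type are my assumptions based on the linker errors:

/* Stub out the version symbols the VOMS libraries expect the host
   application to provide; returning 0 is enough to satisfy them. */
int getMajorVersionNumber(void) { return 0; }
int getMinorVersionNumber(void) { return 0; }
int getPatchVersionNumber(void) { return 0; }

Then compile with -rdynamic (e.g. gcc -rdynamic ...) so the executable exports those symbols, making them visible to the shared libraries at run time.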
However, translating that into something that works for ARC is non-trivial. Recompiling all of AREX to export functions to shared libraries is asking for trouble, given the size of the thing. It's also debatable whether that's the right thing to do, to work around what's really a bug in the libraries themselves.
Fortunately, there is another option. Arc can call plugins to do pool account mapping, and these are small external programs. So writing a short wrapper around LCMAPS is straightforward, and Arc then delegates responsibility to this plugin, which is a nice, self-contained place to keep the workarounds.
My version of such a plugin is here, and should be identified in arc.conf as:
unixgroup=mapplugin 5 arc-lcmap %D %P
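For the curious, here's a minimal sketch of the shape such a wrapper takes - not the actual plugin above. The argv conventions follow the arc.conf line (%D the subject DN, %P the proxy file), the LCMAPS calls are the ones from lcmaps.h (check the exact signatures against your LCMAPS version), and the contract of printing the mapped account on stdout and exiting 0 is my reading of how mapplugin works, so verify it against the Arc documentation:

/* arc-lcmap sketch: map a grid identity to a unix account via LCMAPS. */
#include <stdio.h>
#include <stdlib.h>
#include "lcmaps.h"   /* header location varies between LCMAPS packagings */

/* The VOMS workaround from above lives here, in the plugin,
   rather than in AREX itself. */
int getMajorVersionNumber(void) { return 0; }
int getMinorVersionNumber(void) { return 0; }
int getPatchVersionNumber(void) { return 0; }

int main(int argc, char *argv[])
{
    char *username = NULL;

    if (argc < 3) {
        fprintf(stderr, "usage: arc-lcmap <DN> <proxy-file>\n");
        return 1;
    }

    /* Let LCMAPS pick the credential up from the proxy file. */
    setenv("X509_USER_PROXY", argv[2], 1);

    if (lcmaps_init(stderr) != 0)          /* log to stderr */
        return 1;

    /* NULL credential and request: map on the DN alone, which assumes
       the plugins configured in lcmaps.db are happy with that. */
    if (lcmaps_run_and_return_username(argv[1], NULL, NULL,
                                       &username, 0, NULL) != 0) {
        lcmaps_term();
        return 1;
    }

    printf("%s\n", username);              /* Arc reads the account name here */
    lcmaps_term();
    return 0;
}

Build it with the same trick as before - something like gcc -rdynamic -o arc-lcmap arc-lcmap.c -llcmaps, with include and library paths to suit your LCMAPS install.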
This now lets us use the same pool account mapping and authorisation infrastructure with both gLite and Arc. In particular, this lets us open up the Arc CE to any of our normally supported VOs, as an option for them to explore. That's a topic I'll be working on with some VOs over the summer.
For the moment, though, I need to dismantle the layer of auth-system hacks we were using for Arc.
Tuesday, November 17, 2009
Arc, authorisation and LCMAPS
As a gLite site, it would be ideal if we could have the same user mapping between certificate DNs and unix user names that is used with our existing CEs.
Which means using the gLite LCMAPS to make decisions about which username each user gets.
This is supported in Arc, but not in the same fashion.
The best approach appears to be: have an initial mapping listed in the grid-mapfile (there are utilities to make this easy), which allows a first pass of authorisation. Then, in the gridFTP server, the mapping rules there are applied next - this is where LCMAPS comes in.
Interestingly, Arc makes it very easy to do the thing we found hard with LCMAPS - to have a small set of 'local' users with fixed, permanent mappings (independent of VO), and VO-based pool accounts for other users.
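As an illustration, that combination ends up looking roughly like this in the [gridftpd] section of arc.conf - the VO name and paths are made up, and I'm quoting the unixmap/unixvo syntax from memory, so check it against the arc.conf reference:

[gridftpd]
# fixed, VO-independent mappings for the handful of 'local' users
unixmap="* mapfile /etc/grid-security/local-mapfile"
# everyone else gets a lease from a per-VO pool of accounts
unixvo="myvo simplepool /etc/grid-security/pool/myvo"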
However, it's in the LCMAPS integration that things get a bit stuck.
It's a silly 32/64-bit issue. On a 64-bit system, yum pulls in the 64-bit Arc - as you might expect. Sadly, there's no 64-bit version of LCMAPS in the repositories as yet.
So it's a case of hacking what I need out of etics. I'll post a recipe when I have one, but this is a pretty temporary situation - it looks like Oscar is pretty much LCAS/LCMAPS ready, but they're not a separate package, so they're waiting on the SCAS, CREAM or WMS SL5 64-bit packages.
Friday, November 06, 2009
Arc, and the installation
We've been fiddling with the NorduGrid Arc middleware a bit. Not just out of random curiosity, but more trying to get a handle on the workloads that it suits better than gLite, and vice versa. It does a number of things differently, and by running an Arc CE in parallel with an lcg-CE and CREAM, we can do some solid comparisons. Oh, and the name of the middleware is also much more amenable to puns, so expect a few groaners too.
So, consider this the first in a series. During this process, we expect to end up with a set of notes on how to install and run an Arc setup, for people already familiar with gLite.
Firstly, the install. We took a blank SL5 box, added the NorduGrid repos, and then:
yum groupinstall "ARC Server"
yum groupinstall "ARC Client"
Well, very nearly. There's one more thing needed, which is to add the EPEL dependencies (libVOMS is the key lib):
yum install yum-conf-epel
The next step is to configure it. That's all done in /etc/arc.conf, and is the subject for later posts.
There is a need for a filesystem shared between the CE and the worker nodes, so we fired up a spare disk server for NFS.
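The sharing itself is plain NFS; the hostnames and paths here are illustrative, with the shared area becoming the session directory in arc.conf:

# on the disk server: export the session area to the CE and workers
echo "/export/arc-session *.example.ac.uk(rw,no_root_squash,sync)" >> /etc/exports
exportfs -ra

# on the CE and each worker node:
mkdir -p /var/spool/arc/session
mount -t nfs disk01.example.ac.uk:/export/arc-session /var/spool/arc/session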
Startup is three services, already configured in /etc/init.d: gridftpd, grid-infosys and grid-manager.
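Which is to say:

/etc/init.d/gridftpd start
/etc/init.d/grid-infosys start
/etc/init.d/grid-manager start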
Ta-da! A running Arc CE.
Ok, so there's a fair bit glossed over in the configuration step. Next time, I'll talk about how I configured it to work with our existing queues - and where the expectations for Arc differ from gLite.