Thursday, December 1, 2011

Details on glexec improvements

My last blog post gave a quick overview of why glexec exists, what issues folks run into, and what we did to improve it.  Let's go into some details.

How Condor Update Works
The lcmaps-plugins-condor-update package contains the modules necessary to advertise the payload certificate of the last glexec invocation in the pilot's ClassAd.  The concept is simple; the implementation is a bit tricky.

Condor has long had a command-line tool called condor_advertise; it allows an admin to hand-advertise updates to ads in the collector.  Unfortunately, that's not quite what we need here: we want to update the job ad in the schedd, while condor_advertise typically updates the machine ad in the collector.  Close, but no cigar.

There's a lesser-known utility called condor_chirp that we can use.  Typically, condor_chirp is used to do I/O between the submit side and the starter (for example, you can pull/push files on demand in the middle of the job), but it can also update the job's ad in the schedd.  The syntax is simple:

condor_chirp set_job_attr ATTR_NAME ATTR_VALUE

(look at the clever things Matt does with condor_chirp).  As condor_chirp allows additional access to the schedd, the user must explicitly request it in the job ad.  If you want to try it out, add the following line to your submit file:

+WantIOProxy=TRUE
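For instance, a toy submit file for experimenting might look like the following (the executable and arguments are arbitrary placeholders); once the job is running, condor_chirp, which typically lives under Condor's libexec directory, can be invoked from inside the job's environment:

universe    = vanilla
executable  = /bin/sleep
arguments   = 600
+WantIOProxy=TRUE
queue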

To work, chirp must know how to contact the starter and have access to the "magic cookie"; these are located inside the $_CONDOR_SCRATCH_DIR, as set by Condor in the initial batch process.  As the glexec plugin runs as root (glexec must be setuid root to launch a process as a different UID), we must guard against being fooled by the invoking user.
Accordingly, the plugin uses /proc to read the parentage of the process tree until it finds a process owned by root.  If this is not init, it is assumed the process is the condor_starter, and the job's $_CONDOR_SCRATCH_DIR can be deduced from the $CWD and the PID of the starter.  Since we only rely on information from root-owned processes, we can be fairly sure this is the correct scratch directory.  As a further safeguard, before invoking condor_chirp, the plugin drops privilege to that of the invoking user.  Along with the other security guarantees provided by glexec, we have confidence that we are reading the correct chirp configuration and are not allowing the invoker to increase its privileges.
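In shell terms, the walk looks roughly like this (the real plugin does this in C with considerably more care; this is only a sketch of the idea):

pid=$PPID   # start at the invoking (non-root) process and walk upward
while [ "$(stat -c %u /proc/$pid)" != "0" ]; do
    pid=$(awk '/^PPid:/ {print $2}' /proc/$pid/status)
done
if [ "$pid" != "1" ]; then
    # assume this root-owned ancestor is the condor_starter; its working
    # directory plus its PID tell us where $_CONDOR_SCRATCH_DIR lives
    readlink /proc/$pid/cwd
fi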

Once we know how to invoke condor_chirp, the rest of the process is all downhill.  glexec internally knows the payload's DN and the payload's Unix user, and does the equivalent of the following:

condor_chirp set_job_attr glexec_user "hcc"
condor_chirp set_job_attr glexec_x509userproxysubject "/DC=org/DC=cilogon/C=US/O=University of Nebraska-Lincoln/CN=Brian Bockelman A621"
condor_chirp set_job_attr glexec_time 1322761868

condor_chirp sends the update to the starter, which forwards it to the shadow and then on to the schedd (some of the gory details are covered in the Condor wiki).
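If you want to see the result from the CE side, condor_q will show the new attributes once the update has propagated (the job ID below is just an example):

condor_q -l 1234.0 | grep '^glexec_'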

The diagram below illustrates the data flow:


Putting This into Play
If you really want to get messy, you can check out the source code from Subversion at:
svn://t2.unl.edu/brian/lcmaps-plugins-condor-update
(web view)

The current version of the plugin is 0.0.2.  It's available in Koji, or via yum in the osg-development repository:

yum install --enablerepo=osg-development lcmaps-plugins-condor-update

(you must already have the osg-release RPM installed and glexec otherwise configured).

After installing it, you need to update the /etc/lcmaps.db configuration file on the worker node to invoke the condor-update module.  In the top half, I add:

condor_updates = "lcmaps_condor_update.mod"

Then, I add condor-update to the glexec policy:

glexec:
verifyproxy -> gumsclient
gumsclient -> condor_updates
condor_updates -> tracking

Note we use the "tracking" module locally; most sites will use the "glexec-tracking" module.  Pick the appropriate one.
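For reference, the relevant pieces of my /etc/lcmaps.db end up looking roughly like this (the definitions of verifyproxy, gumsclient, and tracking are omitted; keep whatever your site already has):

# top half: module definitions
condor_updates = "lcmaps_condor_update.mod"

# bottom half: policies
glexec:
verifyproxy -> gumsclient
gumsclient -> condor_updates
condor_updates -> tracking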

Finally, you need to turn on the I/O proxy in the Condor submit file.  We do this by editing condor.pm  (for RPMs, located in /usr/lib/perl5/vendor_perl/5.8.8/Globus/GRAM/JobManager/condor.pm).  We add the following line into the submit routine, right before queue is added to the script file:

print SCRIPT_FILE "+WantIOProxy=TRUE\n";

All new incoming jobs will get this attribute; any glexec invocations they do will be reflected at the CE!
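The tail of the submit script that condor.pm now generates looks roughly like this (fragment only; everything above it depends on the job):

+WantIOProxy=TRUE
queue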

GUMS and Worker Node Certificates
To map a certificate to a Unix user, glexec calls out to the GUMS server using XACML with a grid-interoperable profile.  In the XACML callout, GUMS is given the payload's DN and VOMS attributes.  The same library (the LCMAPS SCAS client) and protocol can also make callouts directly to SCAS, which is more commonly used in Europe.

GUMS is a powerful and flexible authorization tool; one feature is that it allows different mappings based on the originating hostname.  For example, if desired, my certificate could map to user hcc at red.unl.edu but to cmsprod at ff-grid.unl.edu.  To prevent "just anyone" from probing the GUMS server, GUMS requires the client to present an X.509 certificate (in this case, the host certificate); it takes the hostname from the client's certificate.

This has the unfortunate side effect of requiring a host certificate on every node that invokes GUMS; that's fine for the CEs (about 100 in the OSG), but not for glexec on the worker nodes (thousands in the OSG).

When glexec is invoked in EGI, SCAS is contacted using the pilot certificate for the HTTPS connection, with information about the payload certificate carried in the XACML callout; this requires no host certificate on the worker node.

To replicate how glexec works in EGI, we had to develop a small patch to GUMS.  With the patch, when the pilot certificate is used for authentication, the pilot's DN is recorded in the logs (so we know who is invoking GUMS), but the hostname is taken from the self-reported value in the XACML callout.  As authentication is still performed, we believe this relaxation of the security model is acceptable.

A patched, working version of GUMS can be found in Koji and is available in the osg-development repository.  It will still be a few months before the RPM-based GUMS install is fully documented and released, however.

Once installed, two changes need to be made at the server:

  • Do all hostname mappings based on "DN" in the web interface, not the "CN".
  • Any group of users (for example, /cms/Role=pilot) that wants to invoke GUMS must have "read all" access, not just "read self".
Further, /etc/lcmaps.db needs to be changed to remove the following lines from the gumsclient module:

"-cert   /etc/grid-security/hostcert.pem"
"-key    /etc/grid-security/hostkey.pem"
"--cert-owner root"

All of this will be automated going forward, but even now it should help remove some of the pain of deploying glexec!
