Hello everybody,
I've got an HPC cluster on a private network without access to our LDAP servers, for reasons I have no influence on at the moment. Users connect to special nodes called submit nodes to submit (eh!) jobs on the cluster. Those nodes have access to both the public-facing network (and hence our LDAP servers) and the cluster's private network.
At the moment, /etc/passwd, /etc/group and /etc/shadow are simply copied to all cluster nodes. I'd like to move away from this setup.
Updating the submit nodes to use sssd with an LDAP auth_provider should not cause any trouble. I'm concerned about the nodes that are only reachable on the private network.
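For reference, a minimal sssd.conf sketch for the submit nodes might look like the following; the server URI and search base are placeholders, not values from this thread:

```ini
# /etc/sssd/sssd.conf (must be owned by root, mode 0600)
[sssd]
services = nss, pam
domains = cluster

[domain/cluster]
id_provider = ldap
auth_provider = ldap
# Placeholder values -- substitute the real LDAP server and base DN
ldap_uri = ldaps://ldap.example.org
ldap_search_base = dc=example,dc=org
# Let users who have logged in before authenticate while LDAP is unreachable
cache_credentials = true
```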
I could configure the submit nodes as LDAP slaves, but there are security aspects of that setup I'd like to avoid. My question is quite simple: is there a way to leverage the "sssdified" submit nodes from the other nodes through some kind of relay/proxy?
Any suggestion is welcome !
Jean-Baptiste
On 07/16/2014 05:44 AM, Jean-Baptiste Denis wrote:
Hello everybody,
I've got an HPC cluster on a private network without access to our LDAP servers, for reasons I have no influence on at the moment. Users connect to special nodes called submit nodes to submit (eh!) jobs on the cluster. Those nodes have access to both the public-facing network (and hence our LDAP servers) and the cluster's private network.
At the moment, /etc/passwd, /etc/group and /etc/shadow are simply copied to all cluster nodes. I'd like to move away from this setup.
Updating the submit nodes to use sssd with an LDAP auth_provider should not cause any trouble. I'm concerned about the nodes that are only reachable on the private network.
I could configure the submit nodes as LDAP slaves, but there are security aspects of that setup I'd like to avoid. My question is quite simple: is there a way to leverage the "sssdified" submit nodes from the other nodes through some kind of relay/proxy?
Any suggestion is welcome !
Right now, no, and we have nothing like this planned. The simplest solution is to put one of the LDAP servers inside the cluster. If you can't do that, then you are stuck with what you have now.
Potentially what you want is to be able to generate the SSSD cache db on one system and copy it around. There is no such functionality, and the problem with building it is creating the password hashes in such a database in bulk (it would require the passwords in clear, which is a non-starter). When users log in one by one, their passwords can be captured and hashed for later use; it is hard to do in bulk.
Maybe what you can do is make users log into the gateway node and then, once in a while, copy its sssd caches to the other nodes in the cluster, but SSSD on those nodes would be outdated for that period of time. I do not know how usable that is. A new user would have to wait out this period between authenticating and actually being able to submit a job. Maybe you already have a mechanism to queue these things, or you can somehow detect that a user is new and queue the SSSD cache update together with his actual job.
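To be clear, there is no supported tooling for this. If someone experimented anyway, a crude sketch could be a cron job on the gateway along these lines; the node name and interval are made up, and sssd would have to be stopped on the target while its db files are swapped:

```ini
# /etc/cron.d/sssd-cache-sync -- illustrative only, NOT a supported mechanism.
# Every 30 minutes, push the gateway's sssd cache to a compute node.
# The target's sssd must not be running while the db files are replaced.
*/30 * * * * root rsync -a /var/lib/sss/db/ node01:/var/lib/sss/db/
```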
Jean-Baptiste
sssd-users mailing list sssd-users@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/sssd-users
On Wed, 16 Jul 2014, Dmitri Pal wrote:
Potentially what you want is to be able to generate the SSSD cache db on one system and copy it around. There is no such functionality, and the problem with building it is creating the password hashes in such a database in bulk (it would require the passwords in clear, which is a non-starter). When users log in one by one, their passwords can be captured and hashed for later use; it is hard to do in bulk.
I'd argue that there's normally no need to do internal authentication within an HPC cluster; user information alone is sufficient. Internally, you can rely on the integrity of the cluster and get by with trusted auth between the nodes, using HostbasedAuthentication or similar.
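As an illustration of the host-based approach, the relevant OpenSSH settings look roughly like this; the file paths are the standard OpenSSH locations, and the trusted-host list is a placeholder:

```ini
# /etc/ssh/sshd_config on the compute nodes:
HostbasedAuthentication yes

# /etc/ssh/ssh_config on the submit nodes (client side):
#   HostbasedAuthentication yes
#   EnableSSHKeysign yes

# Plus /etc/ssh/shosts.equiv on the compute nodes, listing the
# trusted cluster hosts (placeholder names):
#   submit01.cluster
#   submit02.cluster
```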
Maybe what you can do is make users log into the gateway node and then, once in a while, copy its sssd caches to the other nodes in the cluster, but SSSD on those nodes would be outdated for that period of time. I do not know how usable that is. A new user would have to wait out this period between authenticating and actually being able to submit a job. Maybe you already have a mechanism to queue these things, or you can somehow detect that a user is new and queue the SSSD cache update together with his actual job.
I'm not clear what benefit you get from running SSSD internally versus a cluster-local NIS/LDAP server fed data from an SSSD front node.
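One low-tech way to feed cluster-local name data from an sssd front node is to export the enumerated entries with getent and distribute them (or load them into a local NIS/LDAP server). A rough sketch, assuming `enumerate = true` is set on the front node so getent can list all sssd users, and using a made-up staging path:

```shell
#!/bin/sh
# Sketch: export user/group data as resolved through NSS (files + sss)
# on the front node. Requires enumerate = true in sssd.conf for getent
# to list LDAP-backed entries in full.
set -eu

OUT=/tmp/cluster-nss          # staging directory (hypothetical path)
mkdir -p "$OUT"

# Dump the passwd and group maps.
getent passwd > "$OUT/passwd"
getent group  > "$OUT/group"

# From here the files could be pushed to cluster nodes, or fed into a
# cluster-local NIS/LDAP server as suggested above.
echo "exported $(wc -l < "$OUT/passwd") passwd entries to $OUT"
```

Note that /etc/shadow-style password hashes are deliberately not exported here; with host-based trust inside the cluster, the nodes only need identity data, not credentials.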
jh
On 16 Jul 2014, at 11:44, Jean-Baptiste Denis jbdenis@pasteur.fr wrote:
I could configure submit nodes as ldap slaves, but there are security aspects in that setup I'd like to avoid. My question is quite simple : is there a way to leverage the "sssdified" submit nodes on other nodes using some kind of relay/proxy ?
Would a read-only replica mitigate your security concern?
Dmitri,
Right now, no, and we have nothing like this planned. The simplest solution is to put one of the LDAP servers inside the cluster. If you can't do that, then you are stuck with what you have now.
OK.
Potentially what you want is to be able to generate the SSSD cache db on one system and copy it around. There is no such functionality, and the problem with building it is creating the password hashes in such a database in bulk (it would require the passwords in clear, which is a non-starter). When users log in one by one, their passwords can be captured and hashed for later use; it is hard to do in bulk.
I've thought of that, but even though I will be using SSSD, it looks quite tricky and less robust than simply copying the /etc files around.
Jakub,
Would a read-only replica mitigate your security concern?
Not entirely. And it would take time to validate this kind of setup in my situation.
I think I've now got all the elements to make an educated choice, which is all I wanted. Thank you everybody for your answers.
Jean-Baptiste