I did an IPA migration from CentOS 7 machines to OL 8.1, following the procedure documented at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/htm... .
Today I found out that only four of my eight IPA servers resolve AD users (tested with id <AD_user> on every IPA server). The setup procedure was the same on every machine (except for machine no. 1, which is the CA renewal master).
AD users are resolved on machines 1, 5, 6 and 8. Machines 2, 3, 4 and 7 do not resolve AD users.
Only machines 1, 2, 5 and 6 should have become trust controllers, but when I looked at the server roles in the WebGUI I saw the trust controller role on every machine. Is it possible to remove the trust controller role from 4 of the 8 machines?
Where should I take a closer look?
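For reference, the role assignments can also be listed from the CLI, which makes comparing all eight servers quicker (a sketch; option names are from memory and the hostname is just an example):

  # list all servers that currently hold the AD trust controller role
  ipa server-role-find --role="AD trust controller"
  # or show the enabled roles of a single server
  ipa server-show ipa01.linux.mydomain.at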
Cheers, Ronald
On 05.06.20 16:24, Ronald Wimmer via FreeIPA-users wrote:
I did an IPA migration from CentOS 7 machines to OL 8.1, following the procedure documented at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/htm... .
Today I found out that only four of my eight IPA servers resolve AD users (tested with id <AD_user> on every IPA server). The setup procedure was the same on every machine (except for machine no. 1, which is the CA renewal master).
AD users are resolved on machines 1, 5, 6 and 8. Machines 2, 3, 4 and 7 do not resolve AD users.
When upgrading we could neither keep hostnames nor IP addresses. Might this explain the behaviour above? (could the working machines have IPs of former trust controllers?)
Cheers, Ronald
On 05.06.20 17:33, Ronald Wimmer via FreeIPA-users wrote:
On 05.06.20 16:24, Ronald Wimmer via FreeIPA-users wrote:
I did an IPA migration from CentOS 7 machines to OL 8.1, following the procedure documented at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/htm... .
Today I found out that only four of my eight IPA servers resolve AD users (tested with id <AD_user> on every IPA server). The setup procedure was the same on every machine (except for machine no. 1, which is the CA renewal master).
AD users are resolved on machines 1, 5, 6 and 8. Machines 2, 3, 4 and 7 do not resolve AD users.
When upgrading we could neither keep hostnames nor IP addresses. Might this explain the behaviour above? (could the working machines have IPs of former trust controllers?)
I think I was panicking too early. Because the SSSD DB cache is mounted in RAM, I rebooted the IPA servers sequentially and, voilà, the problem disappeared.
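For the record, the SSSD cache can usually be cleared without a full reboot (a sketch, run on the affected replica):

  # invalidate all cached SSSD entries
  sss_cache -E
  # or, more drastically, stop sssd and remove the cache database files
  systemctl stop sssd
  rm -f /var/lib/sss/db/cache_*.ldb
  systemctl start sssd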
Is there any means of checking the IPA installation? I will try ipa-healthcheck today.
Cheers, Ronald
Ronald Wimmer via FreeIPA-users wrote:
On 05.06.20 17:33, Ronald Wimmer via FreeIPA-users wrote:
On 05.06.20 16:24, Ronald Wimmer via FreeIPA-users wrote:
I did an IPA migration from CentOS 7 machines to OL 8.1, following the procedure documented at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/htm... .
Today I found out that only four of my eight IPA servers resolve AD users (tested with id <AD_user> on every IPA server). The setup procedure was the same on every machine (except for machine no. 1, which is the CA renewal master).
AD users are resolved on machines 1, 5, 6 and 8. Machines 2, 3, 4 and 7 do not resolve AD users.
When upgrading we could neither keep hostnames nor IP addresses. Might this explain the behaviour above? (could the working machines have IPs of former trust controllers?)
I think I was panicking too early. Because the SSSD DB cache is mounted in RAM, I rebooted the IPA servers sequentially and, voilà, the problem disappeared.
Is there any means of checking the IPA installation? I will try ipa-healthcheck today.
That's the way to check the installation.
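A typical invocation, in case it helps (option names from memory, so double-check against ipa-healthcheck --help):

  # run all checks and print only the failures in human-readable form
  ipa-healthcheck --failures-only --output-type human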
rob
On 08.06.20 19:24, Rob Crittenden via FreeIPA-users wrote:
Ronald Wimmer via FreeIPA-users wrote:
On 05.06.20 17:33, Ronald Wimmer via FreeIPA-users wrote:
On 05.06.20 16:24, Ronald Wimmer via FreeIPA-users wrote:
I did an IPA migration from CentOS 7 machines to OL 8.1, following the procedure documented at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/htm... .
Today I found out that only four of my eight IPA servers resolve AD users (tested with id <AD_user> on every IPA server). The setup procedure was the same on every machine (except for machine no. 1, which is the CA renewal master).
AD users are resolved on machines 1, 5, 6 and 8. Machines 2, 3, 4 and 7 do not resolve AD users.
When upgrading we could neither keep hostnames nor IP addresses. Might this explain the behaviour above? (could the working machines have IPs of former trust controllers?)
I think I was panicking too early. Because the SSSD DB cache is mounted in RAM, I rebooted the IPA servers sequentially and, voilà, the problem disappeared.
Is there any means of checking the IPA installation? I will try ipa-healthcheck today.
That's the way to check the installation.
After doing a dnf install ipa-healthcheck, the ipa-healthcheck command is supposed to work, right? If so, that is not the case on Oracle Linux 8.1.
Cheers, Ronald
Ronald Wimmer via FreeIPA-users wrote:
On 08.06.20 19:24, Rob Crittenden via FreeIPA-users wrote:
Ronald Wimmer via FreeIPA-users wrote:
On 05.06.20 17:33, Ronald Wimmer via FreeIPA-users wrote:
On 05.06.20 16:24, Ronald Wimmer via FreeIPA-users wrote:
I did an IPA migration from CentOS 7 machines to OL 8.1, following the procedure documented at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/htm... .
Today I found out that only four of my eight IPA servers resolve AD users (tested with id <AD_user> on every IPA server). The setup procedure was the same on every machine (except for machine no. 1, which is the CA renewal master).
AD users are resolved on machines 1, 5, 6 and 8. Machines 2, 3, 4 and 7 do not resolve AD users.
When upgrading we could neither keep hostnames nor IP addresses. Might this explain the behaviour above? (could the working machines have IPs of former trust controllers?)
I think I was panicking too early. Because the SSSD DB cache is mounted in RAM, I rebooted the IPA servers sequentially and, voilà, the problem disappeared.
Is there any means of checking the IPA installation? I will try ipa-healthcheck today.
That's the way to check the installation.
After doing a dnf install ipa-healthcheck, the ipa-healthcheck command is supposed to work, right? If so, that is not the case on Oracle Linux 8.1.
That's quite a vague statement.
rob
On 12.06.20 14:45, Rob Crittenden wrote:
Ronald Wimmer via FreeIPA-users wrote:
On 08.06.20 19:24, Rob Crittenden via FreeIPA-users wrote:
Ronald Wimmer via FreeIPA-users wrote:
On 05.06.20 17:33, Ronald Wimmer via FreeIPA-users wrote:
On 05.06.20 16:24, Ronald Wimmer via FreeIPA-users wrote:
I did an IPA migration from CentOS 7 machines to OL 8.1, following the procedure documented at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/htm... .
Today I found out that only four of my eight IPA servers resolve AD users (tested with id <AD_user> on every IPA server). The setup procedure was the same on every machine (except for machine no. 1, which is the CA renewal master).
AD users are resolved on machines 1, 5, 6 and 8. Machines 2, 3, 4 and 7 do not resolve AD users.
When upgrading we could neither keep hostnames nor IP addresses. Might this explain the behaviour above? (could the working machines have IPs of former trust controllers?)
I think I was panicking too early. Because the SSSD DB cache is mounted in RAM, I rebooted the IPA servers sequentially and, voilà, the problem disappeared.
Is there any means of checking the IPA installation? I will try ipa-healthcheck today.
That's the way to check the installation.
After doing a dnf install ipa-healthcheck, the ipa-healthcheck command is supposed to work, right? If so, that is not the case on Oracle Linux 8.1.
That's quite a vague statement.
You're right. I should have provided more info:
[root@pipa02 ~]# yum install ipa-healthcheck
This system is receiving updates from Spacewalk server.
Package ipa-healthcheck-core-0.4-4.module+el8.2.0+5596+233bd6ae.noarch is already installed.
Dependencies resolved.
Nothing to do.
Complete!
[root@pipa02 ~]# ipa-healthcheck
-bash: ipa-healthcheck: command not found
[root@pipa02 ~]# which ipa-healthcheck
/usr/bin/which: no ipa-healthcheck in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
[root@pipa02 ~]# find / -name "*ipa-healthcheck*"
/usr/share/licenses/ipa-healthcheck-core
/usr/share/doc/ipa-healthcheck-core
Ronald Wimmer wrote:
On 12.06.20 14:45, Rob Crittenden wrote:
Ronald Wimmer via FreeIPA-users wrote:
On 08.06.20 19:24, Rob Crittenden via FreeIPA-users wrote:
Ronald Wimmer via FreeIPA-users wrote:
On 05.06.20 17:33, Ronald Wimmer via FreeIPA-users wrote:
On 05.06.20 16:24, Ronald Wimmer via FreeIPA-users wrote:
I did an IPA migration from CentOS 7 machines to OL 8.1, following the procedure documented at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/htm... .
Today I found out that only four of my eight IPA servers resolve AD users (tested with id <AD_user> on every IPA server). The setup procedure was the same on every machine (except for machine no. 1, which is the CA renewal master).
AD users are resolved on machines 1, 5, 6 and 8. Machines 2, 3, 4 and 7 do not resolve AD users.
When upgrading we could neither keep hostnames nor IP addresses. Might this explain the behaviour above? (could the working machines have IPs of former trust controllers?)
I think I was panicking too early. Because the SSSD DB cache is mounted in RAM, I rebooted the IPA servers sequentially and, voilà, the problem disappeared.
Is there any means of checking the IPA installation? I will try ipa-healthcheck today.
That's the way to check the installation.
After doing a dnf install ipa-healthcheck, the ipa-healthcheck command is supposed to work, right? If so, that is not the case on Oracle Linux 8.1.
That's quite a vague statement.
You're right. I should have provided more info:
[root@pipa02 ~]# yum install ipa-healthcheck
This system is receiving updates from Spacewalk server.
Package ipa-healthcheck-core-0.4-4.module+el8.2.0+5596+233bd6ae.noarch is already installed.
Dependencies resolved.
Nothing to do.
Complete!
[root@pipa02 ~]# ipa-healthcheck
-bash: ipa-healthcheck: command not found
[root@pipa02 ~]# which ipa-healthcheck
/usr/bin/which: no ipa-healthcheck in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
[root@pipa02 ~]# find / -name "*ipa-healthcheck*"
/usr/share/licenses/ipa-healthcheck-core
/usr/share/doc/ipa-healthcheck-core
There should be two packages, ipa-healthcheck and ipa-healthcheck-core. I'm not sure why dnf is considering ipa-healthcheck-core to satisfy the requirement for ipa-healthcheck.
You need ipa-healthcheck-0.4-4.module+el8.2.0+5596+233bd6ae.noarch
I'd try installing that directly using the full n-v-r. It may be that your satellite server doesn't have the package.
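For example, using the n-v-r above (assuming that build is what your repository should be carrying):

  dnf install ipa-healthcheck-0.4-4.module+el8.2.0+5596+233bd6ae.noarch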
rob
On 15.06.20 16:57, Rob Crittenden via FreeIPA-users wrote:
Ronald Wimmer wrote:
On 12.06.20 14:45, Rob Crittenden wrote:
Ronald Wimmer via FreeIPA-users wrote:
On 08.06.20 19:24, Rob Crittenden via FreeIPA-users wrote:
Ronald Wimmer via FreeIPA-users wrote:
On 05.06.20 17:33, Ronald Wimmer via FreeIPA-users wrote:
On 05.06.20 16:24, Ronald Wimmer via FreeIPA-users wrote:
I did an IPA migration from CentOS 7 machines to OL 8.1, following the procedure documented at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/htm... .
Today I found out that only four of my eight IPA servers resolve AD users (tested with id <AD_user> on every IPA server). The setup procedure was the same on every machine (except for machine no. 1, which is the CA renewal master).
AD users are resolved on machines 1, 5, 6 and 8. Machines 2, 3, 4 and 7 do not resolve AD users.
When upgrading we could neither keep hostnames nor IP addresses. Might this explain the behaviour above? (could the working machines have IPs of former trust controllers?)
I think I was panicking too early. Because the SSSD DB cache is mounted in RAM, I rebooted the IPA servers sequentially and, voilà, the problem disappeared.
Is there any means of checking the IPA installation? I will try ipa-healthcheck today.
That's the way to check the installation.
After doing a dnf install ipa-healthcheck, the ipa-healthcheck command is supposed to work, right? If so, that is not the case on Oracle Linux 8.1.
That's quite a vague statement.
You're right. I should have provided more info:
[root@pipa02 ~]# yum install ipa-healthcheck
This system is receiving updates from Spacewalk server.
Package ipa-healthcheck-core-0.4-4.module+el8.2.0+5596+233bd6ae.noarch is already installed.
Dependencies resolved.
Nothing to do.
Complete!
[root@pipa02 ~]# ipa-healthcheck
-bash: ipa-healthcheck: command not found
[root@pipa02 ~]# which ipa-healthcheck
/usr/bin/which: no ipa-healthcheck in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
[root@pipa02 ~]# find / -name "*ipa-healthcheck*"
/usr/share/licenses/ipa-healthcheck-core
/usr/share/doc/ipa-healthcheck-core
There should be two packages, ipa-healthcheck and ipa-healthcheck-core. I'm not sure why dnf is considering ipa-healthcheck-core to satisfy the requirement for ipa-healthcheck.
You need ipa-healthcheck-0.4-4.module+el8.2.0+5596+233bd6ae.noarch
I'd try installing that directly using the full n-v-r. It may be that your satellite server doesn't have the package.
dnf install ipa-healthcheck.noarch worked.
Cheers, Ronald
Hi Rob,
ipa-healthcheck revealed several errors. I do not want to discuss all of them in public because I do not want to disclose the domain/subdomain names of our AD. (If you think the topic is worth discussing on the mailing list, I will obfuscate them before posting.)
I would highly appreciate it if you could take a quick look and tell me how severe they are and what I can possibly do to fix them. I do not care about the KRA because we have not used that feature so far; the KRA could be set up from scratch again, if possible. The replication conflicts sound much more troubling...
Cheers, Ronald
{ "source": "pki.server.healthcheck.meta.csconfig", "check": "DogtagCertsConfigCheck", "result": "ERROR", "uuid": "29b240d3-a221-4bd5-a3d9-bae309ed33a7", "when": "20200616210039Z", "duration": "0.197320", "kw": { "key": "kra_transport", "nickname": "transportCert cert-pki-kra", "directive": "kra.transport.cert", "configfile": "/var/lib/pki/pki-tomcat/kra/conf/CS.cfg", "msg": "Certificate 'transportCert cert-pki-kra' does not match the value of kra.transport.cert in /var/lib/pki/pki-tomcat/kra/conf/CS.cfg" } -- { "source": "pki.server.healthcheck.meta.csconfig", "check": "DogtagCertsConfigCheck", "result": "ERROR", "uuid": "c946f181-3745-499e-ab9c-289a4ffd36e9", "when": "20200616210039Z", "duration": "0.228105", "kw": { "key": "kra_storage", "nickname": "storageCert cert-pki-kra", "directive": "kra.storage.cert", "configfile": "/var/lib/pki/pki-tomcat/kra/conf/CS.cfg", "msg": "Certificate 'storageCert cert-pki-kra' does not match the value of kra.storage.cert in /var/lib/pki/pki-tomcat/kra/conf/CS.cfg" } -- { "source": "pki.server.healthcheck.meta.csconfig", "check": "DogtagCertsConfigCheck", "result": "ERROR", "uuid": "0e59c252-53d8-449e-bc51-96b59e1a8acc", "when": "20200616210039Z", "duration": "0.260174", "kw": { "key": "kra_audit_signing", "nickname": "auditSigningCert cert-pki-kra", "directive": "kra.audit_signing.cert", "configfile": "/var/lib/pki/pki-tomcat/kra/conf/CS.cfg", "msg": "Certificate 'auditSigningCert cert-pki-kra' does not match the value of kra.audit_signing.cert in /var/lib/pki/pki-tomcat/kra/conf/CS.cfg" } -- { "source": "ipahealthcheck.dogtag.ca", "check": "DogtagCertsConfigCheck", "result": "ERROR", "uuid": "01b18546-b473-40fb-9923-bfb23f152038", "when": "20200616210039Z", "duration": "0.260025", "kw": { "key": "transportCert cert-pki-kra", "directive": "ca.connector.KRA.transportCert", "configfile": "/var/lib/pki/pki-tomcat/conf/ca/CS.cfg", "msg": "Certificate 'transportCert cert-pki-kra' does not match the value of ca.connector.KRA.transportCert in /var/lib/pki/pki-tomcat/conf/ca/CS.cfg" } }, -- { "source": "ipahealthcheck.ds.replication", "check": "ReplicationConflictCheck", "result": "ERROR", "uuid": "3dca4913-3a30-4bff-8326-3c51b0aeda8c", "when": "20200616210039Z", "duration": "0.003225", "kw": { "key": "cn=certmap+nsuniqueid=46562a35-994311e7-bcd9e321-1436c40f,dc=linux,dc=mydomain,dc=at", "glue": false, "conflict": "namingConflict cn=certmap,dc=linux,dc=mydomain,dc=at", "msg": "Replication conflict" } }, { "source": "ipahealthcheck.ds.replication", "check": "ReplicationConflictCheck", "result": "ERROR", "uuid": "58623e97-e913-433a-9793-6bb233afdcc9", "when": "20200616210039Z", "duration": "0.003316", "kw": { "key": "cn=Certificate Identity Mapping Administrators+nsuniqueid=46562a39-994311e7-bcd9e321-1436c40f,cn=privileges,cn=pbac,dc=linux,dc=mydomain,dc=at", "glue": false, "conflict": "namingConflict cn=Certificate Identity Mapping Administrators,cn=privileges,cn=pbac,dc=linux,dc=mydomain,dc=at", "msg": "Replication conflict" } }, { "source": "ipahealthcheck.ds.replication", "check": "ReplicationConflictCheck", "result": "ERROR", "uuid": "7a76e510-8b3a-41f6-b329-fc474ca6202f", "when": "20200616210039Z", "duration": "0.003397", "kw": { "key": "cn=System: Modify Certmap Configuration+nsuniqueid=46562a41-994311e7-bcd9e321-1436c40f,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "glue": false, "conflict": "namingConflict cn=System: Modify Certmap Configuration,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "msg": "Replication conflict" } }, { "source": 
"ipahealthcheck.ds.replication", "check": "ReplicationConflictCheck", "result": "ERROR", "uuid": "e7588c0d-52d8-44d2-beec-4e3c31be3f4b", "when": "20200616210039Z", "duration": "0.003475", "kw": { "key": "cn=System: Read Certmap Configuration+nsuniqueid=46562a45-994311e7-bcd9e321-1436c40f,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "glue": false, "conflict": "namingConflict cn=System: Read Certmap Configuration,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "msg": "Replication conflict" } }, { "source": "ipahealthcheck.ds.replication", "check": "ReplicationConflictCheck", "result": "ERROR", "uuid": "f18f8d2e-b304-4345-8625-55e62ea0a6ca", "when": "20200616210039Z", "duration": "0.003552", "kw": { "key": "cn=System: Add Certmap Rules+nsuniqueid=46562a48-994311e7-bcd9e321-1436c40f,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "glue": false, "conflict": "namingConflict cn=System: Add Certmap Rules,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "msg": "Replication conflict" } }, { "source": "ipahealthcheck.ds.replication", "check": "ReplicationConflictCheck", "result": "ERROR", "uuid": "6509b40b-af09-4813-bd4f-330d8bc2ad07", "when": "20200616210039Z", "duration": "0.003626", "kw": { "key": "cn=System: Delete Certmap Rules+nsuniqueid=46562a4c-994311e7-bcd9e321-1436c40f,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "glue": false, "conflict": "namingConflict cn=System: Delete Certmap Rules,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "msg": "Replication conflict" } }, { "source": "ipahealthcheck.ds.replication", "check": "ReplicationConflictCheck", "result": "ERROR", "uuid": "8d2acff4-40ce-4275-ae63-a7a98be207c2", "when": "20200616210039Z", "duration": "0.003701", "kw": { "key": "cn=System: Modify Certmap Rules+nsuniqueid=46562a50-994311e7-bcd9e321-1436c40f,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "glue": false, "conflict": "namingConflict cn=System: Modify Certmap Rules,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "msg": "Replication conflict" } }, { "source": "ipahealthcheck.ds.replication", "check": "ReplicationConflictCheck", "result": "ERROR", "uuid": "0cf56b18-9f8f-47d1-b086-0ba928709bfc", "when": "20200616210039Z", "duration": "0.003794", "kw": { "key": "cn=System: Read Certmap Rules+nsuniqueid=46562a54-994311e7-bcd9e321-1436c40f,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "glue": false, "conflict": "namingConflict cn=System: Read Certmap Rules,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "msg": "Replication conflict" } }, { "source": "ipahealthcheck.ds.replication", "check": "ReplicationConflictCheck", "result": "ERROR", "uuid": "6eea376a-1e81-4c31-98b4-fd3d72695951", "when": "20200616210039Z", "duration": "0.003873", "kw": { "key": "cn=System: Modify External Group Membership+nsuniqueid=46562a5d-994311e7-bcd9e321-1436c40f,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "glue": false, "conflict": "namingConflict cn=System: Modify External Group Membership,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "msg": "Replication conflict" } }, { "source": "ipahealthcheck.ds.replication", "check": "ReplicationConflictCheck", "result": "ERROR", "uuid": "a4c35d33-31ca-452a-b13c-02ffd6d8eea3", "when": "20200616210039Z", "duration": "0.003953", "kw": { "key": "cn=System: Read External Group Membership+nsuniqueid=46562a64-994311e7-bcd9e321-1436c40f,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "glue": false, "conflict": "namingConflict cn=System: Read External Group 
Membership,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "msg": "Replication conflict" } }, { "source": "ipahealthcheck.ds.replication", "check": "ReplicationConflictCheck", "result": "ERROR", "uuid": "22033461-49bb-4836-ab2f-0a989d046c3f", "when": "20200616210039Z", "duration": "0.004030", "kw": { "key": "cn=System: Manage User Certificate Mappings+nsuniqueid=46562a6b-994311e7-bcd9e321-1436c40f,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "glue": false, "conflict": "namingConflict cn=System: Manage User Certificate Mappings,cn=permissions,cn=pbac,dc=linux,dc=mydomain,dc=at", "msg": "Replication conflict" } }, { "source": "ipahealthcheck.ds.replication", "check": "ReplicationConflictCheck", "result": "ERROR", "uuid": "86a73d33-7c33-49b5-bf25-4d175fb45180", "when": "20200616210039Z", "duration": "0.004114", "kw": { "key": "krbPrincipalName=WELLKNOWN/ANONYMOUS@LINUX.MYDOMAIN.AT+nsuniqueid=64bc25a5-994311e7-bcd9e321-1436c40f,cn=LINUX.MYDOMAIN.AT,cn=kerberos,dc=linux,dc=mydomain,dc=at", "glue": false, "conflict": "namingConflict krbPrincipalName=WELLKNOWN/ANONYMOUS@LINUX.MYDOMAIN.AT,cn=LINUX.MYDOMAIN.AT,cn=kerberos,dc=linux,dc=mydomain,dc=at", "msg": "Replication conflict" } }, { "source": "ipahealthcheck.ds.replication", "check": "ReplicationConflictCheck", "result": "ERROR", "uuid": "98d8b851-1d33-4b0f-9b6b-c204abe9a721", "when": "20200616210039Z", "duration": "0.004185", "kw": { "key": "cn=KDCs_PKINIT_Certs+nsuniqueid=64bc259d-994311e7-bcd9e321-1436c40f,cn=certprofiles,cn=ca,dc=linux,dc=mydomain,dc=at", "glue": false, "conflict": "namingConflict cn=KDCs_PKINIT_Certs,cn=certprofiles,cn=ca,dc=linux,dc=mydomain,dc=at", "msg": "Replication conflict" } }, -- { "source": "ipahealthcheck.ipa.certs", "check": "IPACertRevocation", "result": "ERROR", "uuid": "09f846de-b933-417b-a5b8-c018c7892e61", "when": "20200616210043Z", "duration": "2.086729", "kw": { "key": "20200603161155", "msg": "Request for certificate failed, Certificate operation cannot be completed: EXCEPTION (Certificate serial number 0xffd0008 not found)" } }, { "source": "ipahealthcheck.ipa.certs", "check": "IPACertRevocation", "result": "ERROR", "uuid": "2060d0c2-b602-4487-b5d9-6320c8488464", "when": "20200616210043Z", "duration": "2.158848", "kw": { "key": "20200603161428", "msg": "Request for certificate failed, Certificate operation cannot be completed: EXCEPTION (Certificate serial number 0xffd0009 not found)" } }, { "source": "ipahealthcheck.ipa.certs", -- { "source": "ipahealthcheck.ipa.trust", "check": "IPATrustDomainsCheck", "result": "ERROR", "uuid": "589666e0-0426-4dcd-8576-15ec5e1e37e0", "when": "20200616210043Z", "duration": "0.226474", "kw": { "key": "domain-list", "sssctl": "/usr/sbin/sssctl", "sssd_domains": "mydomain.at, buero.mydomain.at, org.mydomain.at, tk.mydomain.at", "trust_domains": "mydomain.at", "msg": "{sssctl} {key} reports mismatch: sssd domains {sssd_domains} trust domains {trust_domains}" }
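As far as I understand, the last check compares the domains SSSD knows about with the domains in the IPA trust configuration, so the mismatch can be reproduced by hand with something like:

  # domains known to SSSD on this server
  sssctl domain-list
  # trusted domains known to IPA
  ipa trust-find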
Hello Ronald,
Ronald Wimmer via FreeIPA-users freeipa-users@lists.fedorahosted.org writes:
I would highly appreciate it if you could take a quick look and tell me how severe they are and what I can possibly do to fix them. I do not care about the KRA because we have not used that feature so far; the KRA could be set up from scratch again, if possible. The replication conflicts sound much more troubling...
I've also had the KRA certificate problem - it was relatively easy to fix. I just replaced the certificates in the CS.cfg files with the certificates from LDAP (or the WebUI - cut&paste was my friend).
{ "source": "pki.server.healthcheck.meta.csconfig",
...
"key": "kra_transport", "nickname": "transportCert cert-pki-kra", "directive": "kra.transport.cert", "configfile": "/var/lib/pki/pki-tomcat/kra/conf/CS.cfg", "msg": "Certificate 'transportCert cert-pki-kra' does not match
the value of kra.transport.cert in /var/lib/pki/pki-tomcat/kra/conf/CS.cfg"
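If it helps, another place to grab the current certificate for that cut&paste is the Dogtag NSS database (a sketch; the alias directory and nickname assume a default pki-tomcat instance, and CS.cfg expects the base64 body on a single line without the PEM header/footer):

  # dump the transport certificate currently present in the NSS database
  certutil -L -d /etc/pki/pki-tomcat/alias -n "transportCert cert-pki-kra" -a
  # compare it with the value currently stored in CS.cfg
  grep kra.transport.cert /var/lib/pki/pki-tomcat/kra/conf/CS.cfg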
Regarding the replication errors - these look mostly like standard permissions. You should follow the documentation on how to fix replication conflicts. I'd expect that you can probably just remove the conflict entries - but you need to check that.
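A sketch of how to list the conflict entries in 389-ds before deciding what to delete (bind DN and base DN need to match your setup):

  # conflict entries carry the nsds5ReplConflict attribute
  ldapsearch -x -D "cn=Directory Manager" -W -b "dc=linux,dc=mydomain,dc=at" \
      "(&(objectClass=ldapSubEntry)(nsds5ReplConflict=*))" dn nsds5ReplConflict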
Jochen