All,
This is a strange one. When we exec this command under puppet control:
/usr/sbin/realm permit -R AMER.COMPANY.COM processehcprofiler@AMER.COMPANY.COM
Then sssd_be core dumps (segfault).
When we run that ‘realm permit’ command natively on the command line, it executes flawlessly. No core dump.
Naturally, our first thought was that it's something different in the environment. So we dumped the environment under which the puppet exec resource runs. It is quite minimal:
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.m4a=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.oga=01;36:*.opus=01;36:*.spx=01;36:*.xspf=01;36:
LD_LIBRARY_PATH=
LANG=en_US.UTF-8
HISTCONTROL=ignoredups
HOSTNAME=austgcore25.us.company.com
S_COLORS=auto
PWD=/root
APT_LISTCHANGES_FRONTEND=none
DEBIAN_FRONTEND=noninteractive
APT_LISTBUGS_FRONTEND=none
MAIL=/var/spool/mail/root
SHELL=/bin/bash
TERM=xterm
SHLVL=2
MANPATH=:/opt/puppetlabs/puppet/share/man
PATH=/bin:/usr/bin:/sbin:/usr/sbin
HISTSIZE=1000
LESSOPEN=||/usr/bin/lesspipe.sh %s
_=/bin/env
But when we create a bash session with no environment, then add this puppet-supplied environment and run the above 'realm permit', all is still well.
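(If it helps others reproduce that test: a sketch of what we did, with the variable list abbreviated from the dump above.)

```shell
# Sketch: run 'realm permit' under an empty environment plus the
# puppet-supplied variables (abbreviated; the full list is in the dump above).
env -i \
    PATH=/bin:/usr/bin:/sbin:/usr/sbin \
    LANG=en_US.UTF-8 \
    SHELL=/bin/bash \
    TERM=xterm \
    /usr/sbin/realm permit -R AMER.COMPANY.COM processehcprofiler@AMER.COMPANY.COM
```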
We can reproduce this easily in puppet: just delete our breadcrumb file (so that puppet re-executes this 'realm permit' command) and execute another puppet agent run.
Doing this, we obtained a core dump from sssd_be. And it points to some code in ad_id.c:
[root@austgcore25 tmp]# gdb -c core-sssd_be.15405.austgcore25.us.company.com.1563210863
GNU gdb (GDB) Red Hat Enterprise Linux 8.2-5.el8
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/.
Find the GDB manual and other documentation resources online at:
http://www.gnu.org/software/gdb/documentation/.
For help, type "help".
Type "apropos word" to search for commands related to "word".
[New LWP 15405]
Reading symbols from /usr/libexec/sssd/sssd_be...Reading symbols from /usr/lib/debug/usr/libexec/sssd/sssd_be-2.0.0-43.el8.x86_64.debug...done.
done.
warning: Ignoring non-absolute filename: <linux-vdso.so.1>
Missing separate debuginfo for linux-vdso.so.1
Try: dnf --enablerepo='*debug*' install /usr/lib/debug/.build-id/06/44254f9cbaa826db070a796046026adba58266
warning: .dynamic section for "/usr/lib64/libndr-nbt.so.0.0.1" is not at the expected address (wrong library or version mismatch?)
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/libexec/sssd/sssd_be --domain AMER.COMPANY.COM --uid 0 --gid 0 --logger=files'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007f79444f53c0 in ad_get_account_domain_search (req=req@entry=0x5557b6fd45b0) at src/providers/ad/ad_id.c:1276
1276 state->filter = sdap_combine_filters(state, state->base_filter,
(gdb)
This is in RHEL8.
What we suspect is that the shell in which puppet executes this ‘realm permit’ is not supplying something that this executable needs. If we know what it is, we can preface our puppet exec resource code snippet with the missing piece.
Spike
PS This 'realm permit' does seem to perform the correct actions eventually. It adds the expected user to the simple_allow_users line of the appropriate AD domain in the /etc/sssd/sssd.conf file. But because sssd_be segfaults and then has to start up again, the puppet run takes a very long time to complete. (There are about 20-30 users and groups allowed; it has to segfault on each of them.)
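For anyone who wants the full trace without an interactive gdb session, something like the following works (a sketch; it assumes the matching debuginfo packages are installed and uses the core file name from above):

```shell
# Dump a full backtrace of every thread from the core, non-interactively.
gdb -batch \
    -ex 'bt full' \
    -ex 'thread apply all bt' \
    /usr/libexec/sssd/sssd_be \
    core-sssd_be.15405.austgcore25.us.company.com.1563210863
```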
On Mon, Jul 15, 2019 at 12:50:03PM -0500, Spike White wrote:
Anytime sssd_be segfaults, it is a bug. Could you file a bug or a support case (since you mentioned RHEL). In case you file a support case, feel free to send me the number so we can follow up.
It would be nice to attach the core and logs etc to the case or the bug, because our tests make use of realm permit and we have not hit the bug so far.
sssd-users mailing list -- sssd-users@lists.fedorahosted.org
To unsubscribe send an email to sssd-users-leave@lists.fedorahosted.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedorahosted.org/archives/list/sssd-users@lists.fedorahosted.o...
The following case has been opened with RHEL support on this. It was opened this morning:
(SEV 4) Case #02427449 ('realm permit group@DOMAIN' causing background process sssd_be to segfault.)
Spike
On Tue, Jul 16, 2019 at 12:32:29PM -0500, Spike White wrote:
Thank you, comment added. I hope a BZ will be created soon.
Here is the bugzilla link to the ticket:
https://bugzilla.redhat.com/show_bug.cgi?id=1738375
So it appears a BZ has been created.
Spike
All,
This was a case where 'realm permit' of a user was causing a back-end sssd process (sssd_be) to core dump (SIGSEGV). I reported this to this group a few months ago. We're working this case with the Linux OS vendor. It turns out that if we explicitly add:
ldap_sasl_authid = host/<HOST>@<HOST's REALM>
to each [domain/XXX.COMPANY.COM] stanza in /etc/sssd/sssd.conf file, it no longer core dumps.
That is, we have these child AD domains defined in sssd.conf
[domain/AMER.COMPANY.COM]
[domain/EMEA.COMPANY.COM]
[domain/APAC.COMPANY.COM]
However, our host is registered in only one child domain, say AMER for a server amerhost1 in North America. So in each domain stanza above we'd set:
ldap_sasl_authid = host/amerhost1@AMER.COMPANY.COM
Why does this prevent sssd_be from core dumping? Not a clue! But sssd performs flawlessly once this is added.
Spike
Don't know if this is related, but for our puppet runs of 'net ads' we had to add two environment variables, as puppet didn't set them but 'net ads' expects them:
# Puppet doesn't provide USER and LOGNAME and net ads needs them
export USER="$(id -un)"
export LOGNAME="${USER}"
On Mon, Sep 16, 2019 at 05:47:04PM -0500, Spike White wrote:
Hi,
it would be good to see some before and after debug logs.
If ldap_sasl_authid is not set, SSSD tries to determine it from the keytab with the priority given in the sssd-ldap man page:
hostname@REALM
netbiosname$@REALM
host/hostname@REALM
*$@REALM
host/*@REALM
host/*
For a domain other than AMER.COMPANY.COM, all patterns with '@REALM' would not match, since the realm in the keytab will be AMER.COMPANY.COM. The last entry would match 'host/amerhost1@AMER.COMPANY.COM', but maybe there is an earlier entry in the keytab which matches first? The logs would show which principal was actually selected.
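(To check which entries the keytab actually contains, and in what order, a sketch using the default keytab path:)

```shell
# List every principal in the host keytab, in file order; the pattern
# matching described above walks these entries.
klist -k /etc/krb5.keytab
```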
What is a bit puzzling is that by default 'host/amerhost1@AMER.COMPANY.COM' is a service principal, and AD does not allow service principals for authentication. So I assume that you either added 'host/amerhost1@AMER.COMPANY.COM' to the userPrincipalName attribute of the host object or configured AD to allow service principals for authentication.
The second puzzling thing is that if the wrong principal was chosen for authentication, authentication would just fail and the backend should switch into offline mode.
And finally, according to the case you've opened, the crash happened in the process which handles the AMER.COMPANY.COM domain, not in one of the others which might have chosen a wrong principal.
So, if you can attach to the case the logs with 'debug_level=9' in all [domain/...] sections of sssd.conf, once with ldap_sasl_authid set and once without, it might help us understand why SSSD fails without ldap_sasl_authid set.
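(A sketch of one way to add that setting for the stanza names mentioned in this thread; adjust to your sssd.conf before using.)

```shell
# Append debug_level = 9 under each [domain/...] header, then restart sssd.
# Stanza names are the ones from this thread; substitute your own.
for dom in AMER.COMPANY.COM EMEA.COMPANY.COM APAC.COMPANY.COM; do
    sed -i "/^\[domain\/${dom}\]/a debug_level = 9" /etc/sssd/sssd.conf
done
systemctl restart sssd
```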
bye, Sumit
Yes, as you say -- our adcli invocation must add host/<fqdn>@<REALM> to the userPrincipalName.
Here are the attributes associated with a random server that was AD-joined via adcli/sssd:
dn: CN=ACMORASTG01,OU=Servers,OU=UNIX,DC=amer,DC=company,DC=com
cn: ACMORASTG01
distinguishedName: CN=ACMORASTG01,OU=Servers,OU=UNIX,DC=amer,DC=company,DC=com
name: ACMORASTG01
sAMAccountName: ACMORASTG01$
dNSHostName: acmorastg01.company.com
userPrincipalName: host/acmorastg01.company.com@AMER.COMPANY.COM
servicePrincipalName: RestrictedKrbHost/acmorastg01.company.com
servicePrincipalName: RestrictedKrbHost/ACMORASTG01
servicePrincipalName: host/acmorastg01.company.com
servicePrincipalName: host/ACMORASTG01
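(One way to pull such attributes on other hosts for comparison; a sketch. The DC hostname is a placeholder, and it assumes a valid Kerberos ticket for the GSSAPI bind.)

```shell
# Query the computer object's principal attributes from AD.
# dc1.amer.company.com is a placeholder; requires a kinit'd session for -Y GSSAPI.
ldapsearch -Y GSSAPI -H ldap://dc1.amer.company.com \
    -b 'DC=amer,DC=company,DC=com' \
    '(sAMAccountName=ACMORASTG01$)' \
    userPrincipalName servicePrincipalName dNSHostName
```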
I'll try to get the logs before and after, share them via dropbox.
Spike
Sumit,
Ok, I had time today to gather all the logs you wanted. I have the /var/log/sssd/* logs from when it core dumps, and from when it doesn't, all captured at debug_level = 9.
The core dump appears in /var/log/messages as follows:
Sep 25 14:18:19 ol8test01 systemd-coredump[84828]: Process 84817 (sssd_be) of user 0 dumped core.

Stack trace of thread 84817:
#0  0x00007f5068a613c0 ad_get_account_domain_search (libsss_ad.so)
#1  0x00007f5068a61552 ad_get_account_domain_connect_done (libsss_ad.so)
#2  0x00007f506841ff82 sdap_id_op_connect_done (libsss_ldap_common.so)
#3  0x00007f5068415f9a sdap_auth_done (libsss_ldap_common.so)
#4  0x00007f50706560f9 tevent_common_invoke_immediate_handler (libtevent.so.0)
#5  0x00007f5070656127 tevent_common_loop_immediate (libtevent.so.0)
#6  0x00007f507065bf1f epoll_event_loop_once (libtevent.so.0)
#7  0x00007f507065a1bb std_event_loop_once (libtevent.so.0)
#8  0x00007f5070655395 _tevent_loop_once (libtevent.so.0)
#9  0x00007f507065563b tevent_common_loop_wait (libtevent.so.0)
#10 0x00007f507065a14b std_event_loop_wait (libtevent.so.0)
#11 0x00007f50738f7a07 server_loop (libsss_util.so)
#12 0x000055fe299ae38b main (sssd_be)
#13 0x00007f506fe46813 __libc_start_main (libc.so.6)
#14 0x000055fe299ae54e _start (sssd_be)

Sep 25 14:19:49 ol8test01 realmd[84809]: * /usr/bin/systemctl restart sssd.service
Sep 25 14:19:49 ol8test01 systemd-logind[1226]: Failed to start session scope session-81343.scope: Process with ID 84885 does not exist.
Sep 25 14:19:49 ol8test01 systemd[1]: Stopping System Security Services Daemon...
So the segfault occurs at 14:18:19 in the /var/log/sssd/* logs.
I included the good sssd.conf file and the bad sssd.conf file. The only difference is that in the bad sssd.conf file, each [domain/XXXX] stanza has these lines removed:

ldap_sasl_authid = XXX
ldap_search_base = XXX
But still -- that shouldn't cause a segfault.
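For context, a minimal sketch of what one such stanza looks like with the two lines present. The values below are illustrative placeholders based on the amerhost1 example discussed in this thread, not our real settings:

```ini
[domain/AMER.COMPANY.COM]
# ... existing AD provider settings ...
# The two lines below are present in the "good" sssd.conf
# and removed in the "bad" one (placeholder values):
ldap_sasl_authid = host/amerhost1@AMER.COMPANY.COM
ldap_search_base = DC=amer,DC=company,DC=com
```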
Here's the dropbox links to the log tarballs and the sssd.conf files.
https://www.dropbox.com/sh/4pvsnlo7ab8azt6/AAAXkBg99wCd-A6tZsxJZm33a?dl=0
BTW, this occurs only on RHEL8. With that same sssd.conf file on RHEL7, it does not segfault.
Spike
On Mon, Sep 23, 2019 at 2:48 PM Spike White spikewhitetx@gmail.com wrote:
Yes, as you say -- our adcli invocation must add host/<fqdn>@<REALM> to the userPrincipalName.
Here are the attributes associated with a random server AD-joined via adcli/sssd:
dn: CN=ACMORASTG01,OU=Servers,OU=UNIX,DC=amer,DC=company,DC=com
cn: ACMORASTG01
distinguishedName: CN=ACMORASTG01,OU=Servers,OU=UNIX,DC=amer,DC=company,DC=com
name: ACMORASTG01
sAMAccountName: ACMORASTG01$
dNSHostName: acmorastg01.company.com
userPrincipalName: host/acmorastg01.company.com@AMER.COMPANY.COM
servicePrincipalName: RestrictedKrbHost/acmorastg01.company.com
servicePrincipalName: RestrictedKrbHost/ACMORASTG01
servicePrincipalName: host/acmorastg01.company.com
servicePrincipalName: host/ACMORASTG01
I'll try to get the logs before and after, and share them via Dropbox.
Spike
On Mon, Sep 23, 2019 at 6:41 AM Sumit Bose sbose@redhat.com wrote:
On Mon, Sep 16, 2019 at 05:47:04PM -0500, Spike White wrote:
All,
This was a case where 'realm permit' of a user was causing a back-end sssd process (sssd_be) to core dump (SIGSEGV). I reported this to this group a few months ago. We're working this case with the Linux OS vendor. Turns out, if we explicitly add:

ldap_sasl_authid = host/<HOST>@<HOST's REALM>

to each [domain/XXX.COMPANY.COM] stanza in the /etc/sssd/sssd.conf file, it no longer core dumps.
That is, we have these child AD domains defined in sssd.conf
[domain/AMER.COMPANY.COM]
[domain/EMEA.COMPANY.COM]
[domain/APAC.COMPANY.COM]
However, our host is registered in only one child domain, say AMER for a server amerhost1 in North America. So we'd set:

ldap_sasl_authid = host/amerhost1@AMER.COMPANY.COM

in each domain stanza above.
Hi,
it would be good to see some before and after debug logs.
If ldap_sasl_authid is not set, SSSD tries to determine it from the keytab, with the priority given in the sssd-ldap man page:

hostname@REALM
netbiosname$@REALM
host/hostname@REALM
*$@REALM
host/*@REALM
host/*
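As a rough illustration of that priority order, here is a hedged sketch. This is not SSSD's actual code, and the hostname, NetBIOS name, and keytab entries below are invented examples for the amerhost1 scenario in this thread:

```python
# Sketch (not SSSD's real implementation) of the principal-selection
# order from the sssd-ldap man page when ldap_sasl_authid is unset.
from fnmatch import fnmatchcase

def candidate_patterns(hostname, netbios, realm):
    """Priority-ordered patterns from the sssd-ldap man page."""
    return [
        f"{hostname}@{realm}",
        f"{netbios}$@{realm}",
        f"host/{hostname}@{realm}",
        f"*$@{realm}",
        f"host/*@{realm}",
        "host/*",
    ]

def pick_principal(keytab_entries, hostname, netbios, realm):
    """Return the first keytab entry matching the highest-priority pattern."""
    for pat in candidate_patterns(hostname, netbios, realm):
        for entry in keytab_entries:
            if fnmatchcase(entry, pat):
                return entry
    return None

# Invented example of a keytab for a host joined to the AMER domain:
keytab = [
    "AMERHOST1$@AMER.COMPANY.COM",
    "host/amerhost1@AMER.COMPANY.COM",
    "host/amerhost1.company.com@AMER.COMPANY.COM",
]

# For the AMER domain, a realm-qualified pattern matches:
print(pick_principal(keytab, "amerhost1", "AMERHOST1", "AMER.COMPANY.COM"))
# prints AMERHOST1$@AMER.COMPANY.COM

# For EMEA/APAC, no '@REALM' pattern can match entries from the AMER
# realm, so selection falls through to the realm-less 'host/*' pattern:
print(pick_principal(keytab, "amerhost1", "AMERHOST1", "EMEA.COMPANY.COM"))
# prints host/amerhost1@AMER.COMPANY.COM
```

This illustrates Sumit's point below: for domains other than AMER.COMPANY.COM, only the last, realm-less pattern can match, so a service principal from the "wrong" realm ends up selected.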
For a domain other than AMER.COMPANY.COM, none of the patterns with '@REALM' would match, since the realm in the keytab will be AMER.COMPANY.COM. The last entry would match 'host/amerhost1@AMER.COMPANY.COM', but maybe there is an earlier entry in the keytab which matches first? The logs would show which principal was selected with ldap_sasl_authid set.
What is a bit puzzling is that by default 'host/amerhost1@AMER.COMPANY.COM' is a service principal, and AD does not allow service principals for authentication. So I assume that you either added 'host/amerhost1@AMER.COMPANY.COM' to the userPrincipalName attribute of the host object or configured AD to allow service principals for authentication.
The second puzzling thing is that if the wrong principal was chosen for authentication, authentication would just fail and the backend should switch into offline mode.
And finally, according to the case you've opened, the crash happened in the process which handles the AMER.COMPANY.COM domain, not in one of the others which might have chosen a wrong principal.
So, if you can attach to the case the logs with 'debug_level=9' in all [domain/...] sections of sssd.conf, once with ldap_sasl_authid set and once without, it might help to understand why SSSD fails without ldap_sasl_authid set.
bye, Sumit
Why does this prevent sssd_be from core dumping? Not a clue! But sssd performs flawlessly once this is added.
Spike
On Thu, Aug 8, 2019 at 9:09 AM Spike White spikewhitetx@gmail.com
wrote:
Here is the bugzilla link to the ticket:
https://bugzilla.redhat.com/show_bug.cgi?id=1738375
So it appears a BZ has been created.
Spike
On Tue, Jul 16, 2019 at 3:32 PM Jakub Hrozek jhrozek@redhat.com
wrote:
On Tue, Jul 16, 2019 at 12:32:29PM -0500, Spike White wrote:
The following case has been opened with RHEL support on this. It was opened this morning:

(SEV 4) Case #02427449 ('realm permit group@DOMAIN' causing background process sssd_be to segfault.)
Thank you, comment added. I hope a BZ would be created soon.

_______________________________________________
sssd-users mailing list -- sssd-users@lists.fedorahosted.org
To unsubscribe send an email to sssd-users-leave@lists.fedorahosted.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedorahosted.org/archives/list/sssd-users@lists.fedorahosted.o...