This guide contains instructions for enabling LDAP authentication in Zenoss Core 4.2+ on a relatively clean install of CentOS 6 (64-bit).

Assumptions

  • you are running CentOS 6
  • you have installed Zenoss Core 4.2+ using the autodeploy script

Before You Begin

It’s recommended that you back up your Zenoss configuration, either through a VM snapshot (if that’s an option) or via the backup tool (Advanced -> Backups). You may also want to back up your acl_users settings as follows:

  1. Go to https://YOUR_ZENOSS_SERVER/zport/manage and log in as admin.
  2. Click acl_users in the tree view on the left side of the page.
  3. Click Import/Export.
  4. Leave “Export object id” blank, select a dump file location, then click Export.
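
You can also make a full backup from the command line with the zenbackup tool, run as the zenoss user (the file name here is only an example):

zenoss@zenprod:~$ zenbackup --file=/tmp/zenoss-pre-ldap.tgz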

Install Required Auth Plugins

Download LDAPMultiPlugins, LDAPUserFolder, and python-ldap. The versions used as of the time of writing this guide are as follows:

  • LDAPMultiPlugins 1.14
  • LDAPUserFolder 2.24
  • python-ldap 2.4.10

Copy the downloaded tarballs to the Zenoss server.

Next, install the prerequisite packages.

# yum install gcc python-devel openssl-devel openldap-devel

Then, use easy_install to install the three packages you downloaded above. (Note: You must use the easy_install tool if you installed Zenoss using the autodeploy script.)

# su - zenoss
zenoss@zenprod:~$ su
Password:
# cd ~/build
# easy_install Products.LDAPMultiPlugins-1.14.tar.gz
...
# easy_install Products.LDAPUserFolder-2.24.tar.gz
...
# easy_install python-ldap-2.4.10.tar.gz
...
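
Before restarting, a quick sanity check confirms that python-ldap built and installed correctly; the version printed should match the tarball you installed:

# python -c "import ldap; print ldap.__version__"
2.4.10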

Restart Zope.

zenoss@zenprod:~$ zopectl restart
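
You can confirm that Zope came back up with:

zenoss@zenprod:~$ zopectl status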

Configure the LDAP Multi Plugin

  1. Go to https://YOUR_ZENOSS_SERVER/zport/manage and log in as admin.
  2. Click acl_users in the tree view on the left side of the page.
  3. Select LDAP Multi Plugin from the dropdown list and click Add.
  4. Configure the plugin. (Note: your configuration may vary depending on what you want to do, e.g. whether or not you will be assigning roles based on LDAP groups.)

ID: <enter an ID>
Title: <enter a title>
LDAP Server: YOUR_LDAP_SERVER
check Use SSL if necessary
check Read-only
Login Name Attribute, User ID Attribute, RDN Attribute: UID (uid)
Users Base DN: YOUR_BASE_DN
select Groups not stored on LDAP server
Groups Base DN: <blank>
Manager DN: <blank>
User password encryption: SHA
Default User Roles: <blank>

  5. Click acl_users then click the LDAP config you just created from the list.
  6. Check the boxes next to “Authentication”, “User_Enumeration”, and “Role_Enumeration”.

At this point, you should be able to log in to Zenoss using credentials from LDAP.
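
If the LDAP login does not work, it helps to verify the server, Base DN, and uid attribute from the Zenoss server itself with ldapsearch (from the openldap-clients package; the server, Base DN, and username below are placeholders for your own values):

# yum install openldap-clients
# ldapsearch -x -H ldap://YOUR_LDAP_SERVER -b "YOUR_BASE_DN" "(uid=someuser)" uid cn

If this returns the expected entry, the plugin settings above should work with the same values.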

Configure Authorization

To configure Zenoss role mappings from LDAP groups, please see this post: http://community.zenoss.org/message/30124#30124

Restricting Zenoss Access to Specific Users

  1. Go to https://YOUR_ZENOSS_SERVER/zport/manage and log in as admin.
  2. Click acl_users in the tree view on the left side of the page.
  3. Click roleManager.
  4. Click Add a Role and enter “ZenNone” for the ID, then save.
  5. Click acl_users in the tree view on the left side of the page.
  6. Click your LDAP config.
  7. Select the Contents tab.
  8. Click acl_users in the list.
  9. Change Default User Roles to “ZenNone” and apply changes.
  10. Click acl_users in the tree view on the left side of the page.
  11. Click roleManager.
  12. Select the Security tab.
  13. Check all the checkboxes under Manager, Owner, and ZenManager. (IMPORTANT! If you do not do this step, you will lock your admin account out of the system!)
  14. Uncheck all the checkboxes under “Acquire permission settings?”
  15. Check the checkboxes for “Access contents information” and “View” under ZenUser.
  16. Click Save Changes.

When finished, users who are in LDAP are given restricted access (via the ZenNone role) by default, unless they have been granted a different Zenoss role. You can edit Zenoss role assignments via Zope manager -> acl_users -> roleManager.

Last night we upgraded from HP StoreVirtual LeftHand OS 10.0.00.1896 to 10.5.00.0149.

When it came time for the Central Management Console (CMC) to reboot each node, our Linux and Windows hosts saw their respective gateway connections disappear. Each host retried once, got a new gateway connection from one of the remaining nodes in the cluster, and all was well. This manifested in the logs of the affected hosts as follows:

Windows host:

3/26/2013 11:59:54 PM – Error Event ID 20 – iScsiPrt – Connection to the target was lost. The initiator will attempt to retry the connection.
3/26/2013 11:59:55 PM – Error Event ID 1 – iScsiPrt – Initiator failed to connect to the target. Target IP address and TCP Port number are given in dump data.
3/26/2013 11:59:59 PM – Informational Event ID 34 – iScsiPrt – A connection to the target was lost, but Initiator successfully reconnected to the target. Dump data contains the target name.

Linux Host:

03/27 00:08:12 kernel: [22973219.322744] connection3:0: detected conn error (1020)
03/27 00:08:12 iscsid: Kernel reported iSCSI connection 3:0 error (1020) state (3)
03/27 00:08:14 kernel: [22973221.827841] connection3:0: detected conn error (1020)
03/27 00:08:15 iscsid: connection3:0 is operational after recovery (1 attempts)

Several of our hosts were unlucky: their new gateway connection landed on a node that had yet to reboot as part of the LeftHand OS update. When that node’s turn came, the same thing happened again and they received yet another gateway connection.
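
On the Linux hosts, a quick way to check which node a gateway connection has landed on is to list the active iSCSI sessions and their target portals with iscsiadm:

# iscsiadm -m session -P 1

The portal address in the output shows which cluster node is currently servicing the session.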

What is interesting is that our VMware ESXi 5.1 hosts did not notice their respective gateway connections drop or disappear throughout the reboots of each StoreVirtual cluster.

Throughout the entire LeftHand OS upgrade, no customer-facing service was impacted and all hosts kept on serving.

On Thursday the 28th we will be working with NOC to try to resolve an issue that arose when the 15 subnet was placed behind the new firewall. Starting at 10pm, NOC will re-enable the other member of the VPC and then implement the fix. We will then verify that things are working as expected. The estimate is about 10 minutes total. During this 10 minute window mail will not be sent, but will be queued on the webheads. Those who use proxy.oregonstate.edu will not be able to view the Homepage or any other CWS-hosted sites.

Start: 3/28/2013 @ 2200

End: 3/28/2013 @ 2230

If you have questions or concerns about this maintenance, please contact the Shared Infrastructure Group at osu-sig (at) oregonstate.edu or call 737-7SIG.

** Maintenance Announcement – No service interruption anticipated **

On Saturday the 30th at 10pm we will be upgrading the NetScalers to version 10. Since they are in HA mode, no outages or downtime are expected. In the unlikely event of problems, the changes will be rolled back and the maintenance will be rescheduled for a later date.

Start: 03/30/2013 2200

End: 03/30/2013 2259

If you have questions or concerns about this maintenance, please contact the Shared Infrastructure Group at osu-sig (at) oregonstate.edu or call 737-7SIG.

We will be working with NOC to add VLAN 1140 to, and remove VLAN 3817 from, our blade centers’ trunks. This will be done one trunk port at a time, and one data center at a time.

Start Time: 3/23/13 at 9:00 pm

End Time: 3/23/13 at 9:30 pm

If you have any questions or concerns about this maintenance, please contact OSU-SIG (at) oregonstate.edu or call 7-help.

** Maintenance Announcement – No service interruption anticipated **

We will be moving our Milne Blade Center from mccnet101 to nexus gear. We will migrate non-redundant VMs to KAD b210. During the migration we expect to drop 1-2 packets when we cut over from the active fiber pair on mccnet101 to the nexus gear. We do not expect a service interruption, though, as all critical VMs will be moved to our other data center first.

Start: 4/27/2013 10pm

End: 4/27/2013 10:30pm

If you have questions or concerns about this maintenance, please contact the Shared Infrastructure Group at osu-sig (at) oregonstate.edu or call 737-7SIG.

** Maintenance Announcement – No service interruption anticipated **

We will be migrating from our old vCenter servers to our new vCenter servers. There will be no interruption of service. During the move, Administrators who need console access to their VMs may need to check both vCenter servers to find them. After the moves are complete, Administrators will access consoles from the NEW vCenter server.

The new server to connect to is: vcenter.sig.oregonstate.edu.

Regular remote access mechanisms like ssh or remote desktop to the VMs will be unaffected. All VMs and their services will continue to run as normal. There should be no customer impact.

Start: 03/30/2013 9:00 PM

End: 03/30/2013 11:59 PM

If you have questions or concerns about this maintenance, please contact the Shared Infrastructure Group at osu-sig (at) oregonstate.edu or call 737-7SIG.

** Maintenance Announcement – DEV VM service interruption anticipated **

We are upgrading our StoreVirtual firmware from LeftHand OS 10.0 -> LeftHand OS 10.5. During the upgrade, the storage that backs the DEV VMware cluster will be unavailable; as such, the DEV VMware cluster will also be shut down.

Production SANs are redundant, and the maintenance will have no noticeable effect on them; the OSU systems used by students, staff, and faculty will not experience a service interruption.

Start Time: 03/26/2013 at 10:30 PM

End Time: 03/27/2013 at 4:00 AM

If you have questions or concerns about this maintenance, please contact the Shared Infrastructure Group at osu-sig (at) oregonstate.edu or call 737-7SIG.

** Maintenance Announcement – No service interruption anticipated **

We will be moving node1-site2 and node2-site2 from rack mcc-b5 to rack mcc-b6. We are working toward a standard rack layout, and this move will bring mcc-b6 closer to our anticipated standard for racks with blade centers.

Start: 03/26/2013 9:00 PM

End: 03/26/2013 10:30 PM

If you have questions or concerns about this maintenance, please contact the Shared Infrastructure Group at osu-sig (at) oregonstate.edu or call 737-7SIG.

** Maintenance Announcement – Partial service interruption anticipated **

We will be patching the operating systems and installing the latest VMware Tools on our vCenter server infrastructure. During the first part of this maintenance window, support personnel will not be able to access vCenter and therefore will not have remote console access. For example, if a VM has a kernel panic, support personnel will have to wait until maintenance ends to resolve the issue.

Regular remote access mechanisms like ssh or remote desktop to the VMs will be unaffected. All VMs and their services will continue to run as normal. There should be no customer impact.

Start: 03/09/2013 10:00 PM

End: 03/09/2013 11:59:30 PM

If you have questions or concerns about this maintenance, please contact the Shared Infrastructure Group at osu-sig (at) oregonstate.edu or call 737-7SIG.