Tuesday, November 20, 2012

Deploying Unix/Linux Agents using OpsMgr 2012

Microsoft first included Unix and Linux monitoring directly in OpsMgr with OpsMgr 2007 R2, which shipped in 2009.  Some significant updates have been made in this area for OpsMgr 2012, primarily:
  • Highly available monitoring via Resource Pools
  • Sudo elevation support, allowing a low-privilege account with elevation rights for specific workflows
  • SSH key authentication
  • New wizards for discovery, agent upgrade, and agent uninstallation
  • Additional PowerShell cmdlets
  • Performance and scalability improvements
  • New monitoring templates for common monitoring tasks

This article will cover the discovery, agent deployment, and monitoring configuration of a Linux server in OpsMgr 2012.  I am going to run through this as a typical user would – and show some of the pitfalls if you don’t follow the exact order of configuration required.

So what would anyone do first?  They’d naturally run a discovery, just like they do for Windows agents.  However, this will likely end in frustration.  There are several steps that you need to configure FIRST, before deploying Unix/Linux agents.

High Level Overview:

The high level process is as follows:
  • Import Management Packs
  • Create a resource pool for monitoring Unix/Linux servers
  • Configure the Xplat certificates (export/import) for each management server in the pool.
  • Create and Configure Run As accounts for Unix/Linux.
  • Discover and deploy the agents

Import Management Packs:

The core Unix/Linux libraries are already imported when you install OpsMgr 2012, but not the detailed MPs for each OS version.  These are on the installation media, in the \ManagementPacks directory.  Import the specific ones for the Unix or Linux operating systems that you plan to monitor.

Create a resource pool for monitoring Unix/Linux servers
The FIRST step is to create a Unix/Linux monitoring resource pool.  In larger environments, this pool will be associated with management servers dedicated to monitoring Unix/Linux systems; in smaller environments it may include existing management servers that also manage Windows agents or gateways.  Regardless, it is a best practice to create a new resource pool for this purpose, as it eases administration and future scalability expansion.
Under Administration, find Resource Pools in the console:

OpsMgr ships 3 resource pools by default:

Let’s create a new one by selecting “Create Resource Pool” from the task pane on the right, and call it “Unix Linux Monitoring Resource Pool”


Click Add and then click Search to display all management servers.  Select the Management servers that you want to perform Unix and Linux Monitoring.  If you only have 1 MS, this will be easy.  For high availability – you need at least two management servers in the pool.

Add your management servers and create the pool.  In the actions pane – select “View Resource Pool Members” to verify membership.


Configure the Xplat certificates (export/import) for each management server in the pool
Operations Manager uses certificates to authenticate access to the computers it is managing. When the Discovery Wizard deploys an agent, it retrieves the certificate from the agent, signs the certificate, deploys the certificate back to the agent, and then restarts the agent.
To configure high availability, each management server in the resource pool must have all the root certificates that are used to sign the certificates that are deployed to the agents on the UNIX and Linux computers. Otherwise, if a management server becomes unavailable, the other management servers would not be able to trust the certificates that were signed by the server that failed.
We provide a tool to handle the certificates, named scxcertconfig.exe.  Essentially, you must log on to EACH management server that will be part of a Unix/Linux monitoring resource pool and export its SCX (cross-platform) certificate to a file share, then import each other's certificates so they are trusted.
If you have only a SINGLE management server in your pool, you can skip this step and perform it later if you ever add management servers to the Unix/Linux monitoring resource pool.

In this example – I have two management servers in my Unix/Linux resource pool, MS1 and MS2.  Open a command prompt on each MS, and export the cert:
On MS1:
C:\Program Files\System Center 2012\Operations Manager\Server>scxcertconfig.exe -export \\servername\sharename\MS1.cer
On MS2:
C:\Program Files\System Center 2012\Operations Manager\Server>scxcertconfig.exe -export \\servername\sharename\MS2.cer
Once all certs are exported, you must IMPORT the other management server’s certificate:
On MS1:
C:\Program Files\System Center 2012\Operations Manager\Server>scxcertconfig.exe -import \\servername\sharename\MS2.cer
On MS2:
C:\Program Files\System Center 2012\Operations Manager\Server>scxcertconfig.exe -import \\servername\sharename\MS1.cer
If you fail to perform the above steps – you will get errors when running the Linux agent deployment wizard later.

Create and Configure Run As accounts for Unix/Linux

Next up we need to create our run-as accounts for Linux monitoring.   This is documented here:  http://technet.microsoft.com/en-us/library/hh212926.aspx

We need to select “UNIX/Linux Accounts” under administration, then “Create Run As Account” from the task pane.  This kicks off a special wizard for creating these accounts.


Let’s create the Monitoring account first.  Give the monitoring account a display name, and click Next.


On the next screen, type in the credentials that you want to use for monitoring the Linux system(s).


On the above screen you have two choices: you can provide a privileged account for monitoring, or you can use an existing account on the Linux system(s) that is not privileged.  You can then specify whether or not this account should leverage sudo elevation.  Since I am providing a privileged account in this case, I will tell it not to use elevation.
On the next screen, always choose more secure:

Now – since we chose More Secure – we must configure distribution of the Run As account.  Find your “Linux Monitoring Account” under the UNIX/Linux Accounts screen, and open its properties.  On the Distribution Security screen, click Add, then select “Search by resource pool name” and click Search.  Find your Unix/Linux monitoring resource pool, highlight it, click Add, then OK.  This will distribute the account credential to all management servers in our pool:


We would repeat the above process as many times as necessary for the number of different accounts we need.  If all our Linux systems use the same credentials, then we need, at a minimum, ONE monitoring account that is privileged, and it can be associated with the three Run As profiles (covered in the next section).
However, what would be more typical, if all our systems share the same credentials and passwords, is to use THREE Run As accounts:
  • One for Unprivileged (do not use elevation) monitoring
  • One for Privileged monitoring, using EITHER a privileged account (do not use elevation) OR an unprivileged account using sudo (use elevation)
  • One for Agent Maintenance, using EITHER a privileged account (do not use elevation) OR an unprivileged account using sudo (use elevation)
For the purposes of this demo, I am just going to create a SINGLE priv Run As account (root) that I will use for all three scenarios.

Next up – we must configure the Run As profiles.  This is covered here:  http://technet.microsoft.com/en-us/library/hh212926.aspx

There are three profiles for Unix/Linux accounts:


The Agent Maintenance Account is strictly for agent updates, uninstalls, and anything else that requires SSH.  It will always be associated with a privileged account that has access via SSH, created using the Run As account wizard above but selecting “Agent Maintenance Account” as the account type.  We won’t go into details on that here.
The other two Profiles are used for Monitoring workflows.  These are:
Unix/Linux Privileged account
Unix/Linux Action Account
The Privileged Account profile will always be associated with a Run As account like we created above that is privileged (root or similar), OR an unprivileged account that has been configured with elevation via sudo.  Any workflows that typically require elevated rights will execute as this account.
The Action Account is what all your basic monitoring workflows will run as.  It will generally be associated with a Run As account like we created above, but one backed by a non-privileged user account on the Linux systems.
***A note on sudo elevated accounts:
  • sudo elevation must be passwordless.
  • requiretty must be disabled for the user.
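
As a hedged sketch, those two requirements translate into sudoers entries like the following (the user name `scomuser` is hypothetical, and real deployments should scope NOPASSWD to the specific agent commands rather than ALL; see the sudoers samples linked below for per-OS examples):

```
# /etc/sudoers fragment (hypothetical user "scomuser")
# 1) Passwordless elevation for the monitoring user:
scomuser ALL=(root) NOPASSWD: ALL
# 2) Disable requiretty for this user so non-interactive OpsMgr sessions can elevate:
Defaults:scomuser !requiretty
```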

For my example, I am keeping it very simple.  I created a single Run As account of the Monitoring type, using the privileged root account and password credential.  I will associate this Run As account with BOTH the Privileged and Action account profiles.  This will make all my workflows (both normal monitoring and elevated monitoring) run under this credential.  This is not the recommended “lowest priv” design, but I am using it in this example just to keep things simple.  Once we validate it is working, we can go back and change this configuration, experimenting with low-priv and sudo-enabled elevation accounts and associating them independently.
For more information on configuring sudo elevation for OpsMgr monitoring accounts, including some sample configurations for your sudoers files for each OS version:  http://social.technet.microsoft.com/wiki/contents/articles/7375.configuring-sudo-elevation-for-unix-and-linux-monitoring-with-system-center-2012-operations-manager.aspx

I will start with the Unix/Linux Action Account profile.  Right click it – choose properties, and on the Run As Accounts screen, click Add, then select our “Linux Monitoring Account”.  Leave the default of “All Targeted Objects” and click OK, then save.
Repeat this same process for the Unix/Linux Privileged Account profile.
Repeat this same process for the Unix/Linux Agent Maintenance Account profile.

Discover and deploy the agents

Run the discovery wizard.


Click “Add”:

Here you will type in the FQDN of the Linux/Unix agent and its SSH port, and then choose All Computers as the discovery type.  (There is another discovery type option for when you manually install the Unix/Linux agent, which is really just a simple provider, and then use a signed certificate to authenticate.)

Now – hit “Set Credentials”.  If you do not want to provide a root account here and would rather use SSH key authentication, that is now supported on this screen.  For this example, I will simply type in my root account in order to use SSH to discover and deploy the Linux agent.
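
If you do go the key route, a minimal sketch of generating a key pair for the monitoring account looks like this (the key path and comment are illustrative; you would supply the key file to the wizard and protect the private key appropriately):

```shell
# Generate a passphrase-less RSA key pair for the OpsMgr monitoring account
# (illustrative path and comment)
ssh-keygen -t rsa -N "" -f /tmp/scom_key -C "opsmgr-monitoring"

# On the Linux system being monitored, authorize the public key:
#   cat scom_key.pub >> ~/.ssh/authorized_keys
#   chmod 600 ~/.ssh/authorized_keys
```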


Notice above that you can tell the wizard if the account is privileged or not.  Here is an explanation:
  • A privileged account is a user account that has root-level access, including access to security logs and read, write, and execute permissions for the directories in which the Operations Manager agent is installed.
  • An unprivileged account is a normal user account that does not have root-level access or special permissions. However, an unprivileged account allows monitoring of system processes and of performance data.
If you have to discover only UNIX and Linux computers that already have an agent installed, rather than installing an agent, you can use an unprivileged user account on the UNIX or Linux computer. If you have to install an agent, you must use a privileged account. If you do not have a privileged account, you can elevate an unprivileged account to a privileged account provided that the su or sudo elevation program has been configured on the UNIX or Linux computer for the user account.

So – if we had pre-installed the agent already – we could simply use an unprivileged account to authenticate and discover the system, bringing it into OpsMgr.
Or – we could provide an unprivileged account that was allowed elevation via a pre-existing sudo configuration on the Linux server.


Click save.  On the next screen – select a resource pool.  We will choose the resource pool that we already created.


Click Discover, and the results will be displayed:


Check the box next to your discovered system – and deploy the agent.


This will take some time to complete: the agent is checked for the correct FQDN and SSL certificate, the management servers are inspected to ensure they all have trusted SCX certificates (which we exported and imported above), the connection is made over SSH, the package is copied down and installed, and the final certificate signing occurs.  If all of these checks pass, we get a success!

There are several things that can fail at this point.  See the troubleshooting section at the end of this article.

Monitoring Linux servers:

Assuming we got all the way to this point with a successful discovery and agent installation, we need to verify that monitoring is working.  After an agent is deployed, the Run As accounts will start being used to run discoveries, and start monitoring.  Once enough time has passed for these, check in the Administration pane, under Unix/Linux Computers, and verify that the systems are not listed as “Unknown” but discovered as a specific version of the OS:


Next – go to the Monitoring pane – and select the “Unix/Linux Computers” view at the top.  Verify that your systems are present and that there is a green healthy check mark next to them:


Next – expand the Unix/Linux Computers folder in the left tree (near the bottom) and make sure we have discovered the individual objects, like Linux Server State, Linux Disk State, and Network Adapter state:


Run Health explorer on one of the discovered disks.  Remove the filter at the top to see all the monitors for the disk:


Close health explorer. 
Select the Operating System Performance view.   Review the performance counters we collect out of the box for each monitored OS.


Out of the box – we discover and apply a default monitoring template to the following objects:
  • Operating System
  • Logical disk
  • Network Adapters
Optionally, you can enable discoveries for:
  • Individual Logical Processors
  • Physical Disks
I don’t recommend enabling additional discoveries unless you are sure that your monitoring requirements cannot be met without discovering these additional objects, as they will reduce the scalability of your environment.

Out of the box – for an OS like RedHat Enterprise Linux 5 – here is a list of the monitors in place, and the object they target:


There are also 50 rules enabled out of the box.  46 are performance collection rules for reporting, and 4 are event-based rules dealing with security.  Two are informational, letting you know whenever a direct login is made using root credentials via SSH and whenever su elevation occurs in a user session.  The other two deal with failed attempts for SSH or su.
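
Those security rules are driven by entries in the system’s auth log.  As a hedged illustration, you can see the raw events they parse on the Linux side (the log path varies by distro; RHEL-family systems use /var/log/secure, Debian-family systems /var/log/auth.log):

```
# Direct root SSH logins and su sessions show up as entries like these:
grep -E "Accepted .* for root|session opened for user" /var/log/secure
# Failed SSH attempts:
grep "Failed password" /var/log/secure
```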

To get more out of your monitoring – you might have other services, processes, or log files that you need to monitor.  For that, we provide Authoring Templates with wizards to help you add additional monitoring, in the Authoring pane of the console under Management Pack templates:


In the reporting pane – we also offer a large number of reports you can leverage, or you can always create your own using our generic report templates, or custom ones designed in Visual Studio for SQL reporting services.


As you can see, it is a fairly well-rounded solution, bringing Unix and Linux monitoring into a single pane of glass with your other systems, from the hardware, to the operating system, to the network layer, to the applications.
Partners and 3rd-party vendors also supply additional management packs which extend our Unix and Linux monitoring, discovering and providing detailed monitoring of non-Microsoft applications that run on these Unix and Linux systems.


Troubleshooting:

The majority of troubleshooting comes in the form of failed discovery/agent deployments.

Microsoft has written a wiki on this topic, which covers the majority of these, and how to resolve:

  • For instance – if the DNS name that you provided does not match the DNS hostname on the Linux server or its SSL certificate, or if you failed to export/import the SCX certificates for multiple management servers in the pool, you might see:


Agent verification failed. Error detail: The server certificate on the destination computer (rh5501.opsmgr.net:1270) has the following errors:
The SSL certificate could not be checked for revocation. The server used to check for revocation might be unreachable.

The SSL certificate is signed by an unknown certificate authority.
It is possible that:
1. The destination certificate is signed by another certificate authority not trusted by the management server.
2. The destination has an invalid certificate, e.g., its common name (CN) does not match the fully qualified domain name (FQDN) used for the connection. The FQDN used for the connection is: rh5501.opsmgr.net.
3. The servers in the resource pool have not been configured to trust certificates signed by other servers in the pool.


The solution to these common issues is covered in the Wiki with links to the product documentation.
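
To make the CN-mismatch case above concrete, here is a hedged local sketch using openssl: it creates a throwaway self-signed certificate with a CN and reads the CN back, which is the same field OpsMgr compares against the FQDN you typed into the wizard (paths are illustrative; the hostname is the example from the error text):

```shell
# Create a throwaway self-signed certificate whose CN is the agent's FQDN
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/scx_demo.key -out /tmp/scx_demo.pem \
  -subj "/CN=rh5501.opsmgr.net"

# Read the subject back; if this CN did not match the FQDN used for the
# connection, OpsMgr would raise the "invalid certificate" error above
openssl x509 -in /tmp/scx_demo.pem -noout -subject
```

Against a live agent, you can inspect the certificate it actually presents with `openssl s_client -connect <fqdn>:1270`.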

  • Perhaps you failed to properly configure your Run As accounts and profiles.  You might see systems show as “Unknown” under Administration:


Or you might see alerts in the console:

Alert:  UNIX/Linux Run As profile association error event detected
The account for the UNIX/Linux Action Run As profile associated with the workflow "Microsoft.Unix.AgentVersion.Discovery", running for instance "rh5501.opsmgr.net" with ID {9ADCED3D-B44B-3A82-769D-B0653BFE54F9} is not defined. The workflow has been unloaded. Please associate an account with the profile.
This condition may have occurred because no UNIX/Linux Accounts have been configured for the Run As profile. The UNIX/Linux Run As profile used by this workflow must be configured to associate a Run As account with the target. 
Either you failed to configure the Run As accounts, or failed to distribute them, or you chose a low priv account that is not properly configured for sudo on the Linux system.  Go back and double-check your work there.

If you want to check whether the agent was deployed to a RedHat system, you can query the package database from a shell session on that system.
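
A hedged example of that check on an RPM-based system (`scx` is the standard package name for the OpsMgr 2012 cross-platform agent; the paths are the agent’s defaults):

```
# Query the RPM database for the OpsMgr cross-platform (SCX) agent
rpm -q scx

# The agent's binaries live under /opt/microsoft/scx/
ls /opt/microsoft/scx/bin/
```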

Kevin Holman

Monday, November 12, 2012

System Center 2012 Update Rollup 3 (UR3) Released!

Update Rollup 3 for System Center Operations Manager 2012 (KB2750631)

Issue 1
When you use the 32-bit version of Windows Internet Explorer to start a web console, the Microsoft.EnterpriseManagement.Presentation.Controls.SpeedometerGaugeUIController controller does not work correctly.

Issue 2

When you run a Windows PowerShell cmdlet, you receive the following error message:
Get-BPAModel is not recognized as the name of a cmdlet.
Issue 3

When you try to change a URL in the "web application availability monitoring" template instance, the change is not applied.

Download the Operations Manager update package now.
To manually install the update rollup packages, run the following command from an elevated command prompt:

msiexec.exe /update <PackageName>
For example, to install Update Rollup 3 for System Center Operations Manager 2012 (KB2750631), run the following command:

msiexec.exe /update KB2750631-AMD64-Server.msp

Thursday, November 8, 2012

File system error or corruption

Monitor Name : File system error or corruption
Description : Monitors whether the file system has reported an error or corruption on the logical disk.
Monitor Target : Windows Server 2008 Logical Disk


NTFS has reported that the logical disk is either corrupted or completely unavailable. Some data stored on the volume may be inaccessible.


A logical disk may become corrupted or inaccessible for a number of reasons, including:
  • A physical disk related to the logical disk has been removed or has failed
  • A physical disk related to the logical disk has become corrupt (for example, bad sectors) or inoperable
  • The disk driver may have encountered an issue


Check the status of your hardware for any failures (for example, a disk, controller, cabling failure). In most cases, the system log contains additional events from the lower-level storage drivers that indicate the cause of the failure.
After you have isolated and resolved the hardware problem:
1. Open the Disk Management snap-in.
2. Rescan the disks, and then reactivate any disks with errors.  Resynchronize or regenerate the volume as necessary if the disk was a member of a mirrored or RAID-5 volume.
3. Run chkdsk on any reactivated volumes.

You can check this yourself:
  1. Run/Open WBEMTEST
  2. Connect to root\CIMV2
  3. Select Query..., paste in:  select * from Win32_LogicalDisk where (DriveType=3 or DriveType=6) and FileSystem != null  and click Apply
  4. Double-click whichever drive letter shows red to select it.
  5. On the right side, click Show MOF
  6. Scroll all the way down in the list to “VolumeDirty”
If VolumeDirty = TRUE, then this monitor will be “BAD”.
What this means is that at some point this volume got an NTFS error, or was removed from the OS in a critical manner.  It *requires* a Chkdsk /f to be run against the volume to restore VolumeDirty to a FALSE condition.
So – if you see these – you can double-check by running the simple WMI query… and then just schedule a Chkdsk on the volume during the next available maintenance window.
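
As a hedged alternative to the WMI query, from an elevated command prompt on the Windows server itself, `fsutil` reads the same dirty bit that WMI exposes as VolumeDirty, and `chkdsk /f` on an in-use volume will offer to schedule itself at the next reboot (the drive letter is illustrative):

```
REM Query the dirty bit; reports whether the volume is Dirty or NOT Dirty
fsutil dirty query C:

REM Fix it; on an in-use volume this prompts to schedule chkdsk at next boot
chkdsk C: /f
```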