EMC VNXe Performance Analysis with PowerShell Part II

I appreciate the positive feedback I have received on the VNXePerformance module so far. I thought I would add to it and provide a script to generate a basic report. The script can be downloaded here.

The script will produce an HTML report and associated graphics with the following information.

  • Capacity information for the system and pools (total and allocated)
    • Maximum, Minimum, Average, Median
    • Historical graphs for system and each pool
  • Bandwidth usage per protocol
    • Maximum, Minimum, Average, Median
    • Historical graphs
  • IOPS usage per protocol
    • Maximum, Minimum, Average, Median
    • Historical graphs

Here is a sample.

The previous post used PowerGadgets for the charting functionality. That tool is not free, and it is also not yet supported with PowerShell 3.0. To address both issues, this reporting script includes a function which uses the charting functionality in the .NET 4.0 Framework. It requires a bit more work to use, but it will work well for our purposes here. The script uses the VNXePerformance.psm1 module from my previous post plus a few new functions to produce an HTML report and associated graphic files. A command line example to run the script is shown below.
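Something along these lines, with the paths adjusted for your system (the parameter names here are illustrative; check the param() block in the download for the actual names):

    .\Start-VNXeHTMLPerformanceReport.ps1 -DBLocation 'C:\VNXeData' -ReportPath 'C:\Reports'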

The script uses data provided by the VNXePerformance module and the functions in the script to format and write the report data. Here is a brief description of the functions used.

Out-DataTable – This function converts the PSObject data returned by the module functions to the System.Data.DataTable type, which is required for data binding to produce charts.
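Here is a minimal sketch of the conversion it performs; the real function handles column typing and edge cases more carefully.

    function Out-DataTable {
        param([Parameter(ValueFromPipeline = $true)][PSObject]$InputObject)
        begin { $table = New-Object System.Data.DataTable }
        process {
            # build the column list from the first object's properties
            if ($table.Columns.Count -eq 0) {
                foreach ($p in $InputObject.PSObject.Properties) {
                    [void]$table.Columns.Add($p.Name)
                }
            }
            $row = $table.NewRow()
            foreach ($p in $InputObject.PSObject.Properties) {
                # DataRows reject $null, so substitute DBNull
                $row[$p.Name] = if ($null -ne $p.Value) { $p.Value } else { [DBNull]::Value }
            }
            $table.Rows.Add($row)
        }
        end { ,$table }   # the comma prevents the DataTable from unrolling
    }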

Out-LineChart – This function provides the chart-generating functionality, producing a line chart from a provided DataTable and writing it out as a .png graphic file.
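As an illustration of the underlying .NET charting calls (the column names here are placeholders), a line chart can be built and saved like this:

    Add-Type -AssemblyName System.Windows.Forms.DataVisualization
    $chart = New-Object System.Windows.Forms.DataVisualization.Charting.Chart
    $chart.Width  = 800
    $chart.Height = 400
    $chart.ChartAreas.Add((New-Object System.Windows.Forms.DataVisualization.Charting.ChartArea))
    $series = $chart.Series.Add('TotalSpace')
    $series.ChartType = [System.Windows.Forms.DataVisualization.Charting.SeriesChartType]::Line
    # $dataTable is assumed to hold Timestamp and TotalSpace columns
    foreach ($row in $dataTable.Rows) {
        [void]$series.Points.AddXY($row['Timestamp'], $row['TotalSpace'])
    }
    $chart.SaveImage('SystemSpace.png', 'Png')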

Get-SeriesRollup – This function creates summary data (maximum, minimum, average, median) for series data.
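A sketch of the calculation; Measure-Object covers everything but the median:

    function Get-SeriesRollup {
        param([double[]]$Values)
        $stats  = $Values | Measure-Object -Maximum -Minimum -Average
        $sorted = $Values | Sort-Object
        $mid    = [int][math]::Floor($sorted.Count / 2)
        $median = if ($sorted.Count % 2) { $sorted[$mid] }
                  else { ($sorted[$mid - 1] + $sorted[$mid]) / 2 }
        New-Object PSObject -Property @{
            Maximum = $stats.Maximum
            Minimum = $stats.Minimum
            Average = $stats.Average
            Median  = $median
        }
    }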

The following functions create the HTML report output:

  • ConvertTo-SeriesRollupHTML
  • Write-ChartHTML
  • Write-BlankHTMLTable
  • Write-HeaderHTMLTable

The first part of the script defines the parameters, loads the charting assembly, contains the function declarations, and imports the module.

The next portion of the script sets the location of the SQLite database and begins the HTML report string.

The next portion of the script completes the report by using the VNXePerformance module to retrieve object data and then outputting HTML using the script functions.

The final portion of the script closes out the HTML file and writes it to disk.

This should provide a good starting point for reporting, though it has much room for improvement. Please comment with anything you discover about the SQLite data or information you add to the report.

Start-VNXeHTMLPerformanceReport.zip
Regards,

Dave

EMC VNXe Performance Analysis with PowerShell

Hello again! It has been a long time, but the day job takes precedence. Recently I have been working on a little pet project I thought I would share. The EMC VNXe is a great entry-level storage array which has been popular with SMBs and with remote office/branch office deployments in the enterprise. I was recently asked to look at performance on one of these arrays. While investigating this request I discovered the VNXe has reduced monitoring functionality compared to its big brother, the VNX. Performance information is collected differently, and the VNXe does not currently have the ability to utilize Unisphere Analyzer.

While searching on the web I found the following article: http://henriwithani.wordpress.com/2011/12/01/hidden-vnxe-performance-statistics/ . Henri shows where to find additional performance statistics on the VNXe and how to export the data to CSV for reporting. The VNXe performance data is stored in SQLite, a public domain, C-library-based SQL database. While investigating the SQLite database and tools, I discovered there is an ADO.NET provider assembly available for SQLite. Since I am somewhat partial to PowerShell, and any .NET object is accessible from PowerShell, I thought I would take a shot at building a PowerShell module for looking at VNXe performance data.

Here is what I came up with: the VNXePerformance.psm1 PowerShell module. The module currently consists of 21 CmdLets. One CmdLet sets the location of the SQLite database tables and the remaining 20 return data stored in those tables. The CmdLets currently work with tables in the following three SQLite database files:
  • capacity.db
  • stats_basic_summary.db
  • stats_basic_default.db

These three databases appear to hold most of the interesting information, so I concentrated on them first. The primary purpose of the module is to query the database and return PowerShell objects from the data. A secondary purpose is to add calculated properties to some of those returned objects. The calculated properties currently implemented provide bytes per second for each of the three supported IP storage protocols (NFS, CIFS, and iSCSI) as well as read and write IOs per second. This was the first info I needed and was pretty simple to figure out. I plan to add additional calculated fields in the future as I learn more about the data.
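As a contrived illustration of the technique (the property names here are hypothetical, not the module's real column names), a rate can be added as a calculated property:

    # $samples is assumed to hold rows with a byte-count delta and the
    # sample interval in seconds
    $rates = $samples | Select-Object Timestamp,
        @{Name = 'NFSBytesPerSec'; Expression = { $_.NFSBytesDelta / $_.IntervalSeconds }}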

Using the module

Install the SQLite ADO.NET Provider Assembly

The VNXePerformance module requires the SQLite ADO.NET provider to be installed, which can be downloaded from the following page: http://system.data.sqlite.org/index.html/doc/trunk/www/downloads.wiki
I recommend using the setup package for the framework and OS version you are using.

The next step is to download the module and unzip it to the modules folder. The proper modules folder can be identified with the $env:PSModulePath variable.

Download the module VnxePerformance.zip

Unzip the module to the PowerShell modules folder, open PowerShell, and verify the module is available. The VNXePerformance module should be listed when executing the Get-Module -ListAvailable command, if installed in the proper location.
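For example:

    $env:PSModulePath -split ';'                # lists the module folders
    Get-Module -ListAvailable VNXePerformance   # should return the module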

Then set the following line in the module to the location of System.Data.SQLite.dll and save changes as necessary. The location defined in the module is the default installation path for the assembly if installed with the prepackaged installer.
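The line is an assembly load along these lines; the path shown assumes the default install location, so adjust it to match your system.

    [void][System.Reflection.Assembly]::LoadFrom('C:\Program Files\System.Data.SQLite\2010\bin\System.Data.SQLite.dll')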

We are now ready to use the module so open a PowerShell prompt and import the module.

We can now use the get-command cmdlet to see the new functionality provided by our module.

We can also use the get-help CmdLet to find out the functionality provided by each function. A particularly helpful piece of info in the function help is the example field list and data.
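For example:

    Import-Module VNXePerformance
    Get-Command -Module VNXePerformance
    Get-Help Get-SystemTotals -Full

(Get-SystemTotals is used here and below as a stand-in name; substitute whichever cmdlet Get-Command actually lists.)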

Now we can set the location of our downloaded SQLite database files. Please see the following post to learn how to retrieve these files: http://henriwithani.wordpress.com/2011/12/01/hidden-vnxe-performance-statistics/ . The command is as follows.
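In sketch form, with the cmdlet and parameter names standing in for the module's actual location-setting function:

    Set-DBLocation -Path 'C:\VNXeData'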

Now we can use the other functions to retrieve data. The following example will give us the last record from the System_Totals table in the capacity.db database.
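In sketch form, again with a stand-in cmdlet name:

    Get-SystemTotals | Select-Object -Last 1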

This also enables some powerful one-line commands, such as the following, which uses a charting tool called PowerGadgets to produce a historical chart showing total space and allocated space for the system. PowerGadgets is a pay tool which I use, but there are other free ways to do charting, such as the MSChart controls included in the .NET Framework.
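In spirit the one-liner looks like this, where Out-Chart is the PowerGadgets cmdlet and the property names are assumptions about the System_Totals columns:

    Get-SystemTotals | Out-Chart -Values TotalSpace, AllocatedSpace -Label Timestamp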

SystemSpace

The following one-line command runs against one of the calculated properties to return basic statistics on LUNDiskReadsPerSec.
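Along these lines, with a stand-in cmdlet name for the stats_basic_default data:

    Get-BasicDefault | Measure-Object -Property LUNDiskReadsPerSec -Average -Maximum -Minimum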

After combining some of these techniques into a script, the following simple report was produced for a VNXe system in a lab. The system is only configured for iSCSI and has a single performance pool. The report shows statistics and graphs for capacity, iSCSI bandwidth, disk IO, and cache.

SystemSpace
PoolSpace
DartBandwidth
FlareIOA
FlareIOB
CReadA
CReadB

The report above shows us some good basic information about this array. The information about other protocols has been omitted as it did not apply to this system. This system has a very consistent and repetitive workload which is primarily read. The read rate of approximately 8000 IOPS seems high for this system, but a large portion of these reads are being served from cache. We see a read cache hit ratio of around 60%, which means the majority of reads are served from cache and allows for high IO performance. This system is in a lab and has some VMs running on it, so I assume there is a simulator or something of that sort providing the uniform load.

I hope someone finds this useful and if so please provide feedback on potential improvements. I will try to make periodic updates to the module and also post additional examples here. The next topic will be more on the scripting to utilize the module and produce reports.

I would like to end by saying EMC does not provide any documentation on the SQLite database that I could find. So please use the information and provided code with caution and without warranty of any kind. If anyone from EMC reads this and is willing to provide me with documentation on the SQLite database, particularly the field value definitions/explanations, it would be greatly appreciated.

Regards,

Dave

VMware SRM using HDS AMS 2000

I have recently been working on configuring VMware SRM using Hitachi AMS 2000 arrays. This has not been the most straightforward or well-documented process, so I hope this post will save someone else a little time.

In this post I will focus on the configuration and the required prerequisites for the HDS SRA 2.0 for VMware SRM. During testing I used ESX 4.0 Update 1, vCenter 4.0 Update 1, and SRM 4.0.1.

When working with this solution there are some obvious pieces of documentation such as the following:

Hitachi Storage Replication Adapter Software VMware vCenter Site Recovery Manager Deployment Guide

Site Recovery Manager Administration Guide

There are also some other helpful pieces of documentation which may not be as obvious, such as:

Hitachi AMS 2000 Family Command Control Interface (CCI) Installation, Reference, and User’s Guides. These can be found on the HDS Support portal.

Implementing VMware Site Recovery Manager with Hitachi Enterprise Storage Systems Whitepaper

How To Debug CCI Issues 1.3 article on the HDS GSC website

Also, the sample horcm.conf file installed with the CCI has some good information in the comments.

Now on to the configuration.

Prerequisites

Before configuration of the HDS SRA is possible the following requirements must be in place:

  • Two HDS AMS 2000 arrays connected by WAN (FC or iSCSI)
  • One VMware vCenter installation in the primary/protected site
  • One VMware vCenter installation in the secondary/recovery site
  • One VMware SRM installation in the primary/protected site
  • One VMware SRM installation in the secondary/recovery site
  • TrueCopy replication in place for LUNs with protected datastores
  • SRM sites paired

Test Environment

Our test environment consists of one cluster at the primary site and one at the secondary site. We have a test SharePoint environment which is stored across four VMFS datastores. See the diagram below.

Test Environment

Our goal for this test configuration will be to fail over the SharePoint environment to the recovery site. We are replicating the LUNs containing all of the SharePoint system data using TrueCopy Extended Distance. Also, SRM is installed at both sites and the sites have been paired.

After the items above are in place we can move on to configuring the SRA.

HDS SRA 2.0 Configuration

When configuring the storage replication adapter, the primary documentation is the deployment guide referenced above, though I thought there were a few things missing from it.

The first step in getting the SRA configured is making sure you have a copy of the proper HDS CCI for your array firmware. The HDS SRA relies on the Hitachi Command Control Interface, which must be installed on the SRM servers. I installed the CCI in the default C:\HORCM directory on both SRM servers. This is a straightforward install and is documented in the Hitachi AMS 2000 Family Command Control Interface (CCI) Installation Guide.

The portion of the CCI install that was tough for me was determining that it needed to be installed as a service and creating the horcm.conf files. Our example above will only require two instances of the horcm service, one on each SRM server; they will be HORCM0 and HORCM1.

To create the services, we create the horcm<x>_run.txt files and issue the following commands.

On the first SRM server – create the C:\HORCM\Tool\horcm0_run.txt file and then execute the following command:
    C:\HORCM\Tool\svcexe /S=HORCM0 /A=C:\HORCM\Tool\svcexe.exe

On the second SRM server – create the C:\HORCM\Tool\horcm1_run.txt file and then execute the following command:
    C:\HORCM\Tool\svcexe /S=HORCM1 /A=C:\HORCM\Tool\svcexe.exe

The horcm<x>_run.txt file is created by making a copy of the sample file, naming it appropriately, and setting the HORCMINST variable to the correct instance number. This is documented in the sample file located in C:\HORCM\Tool.

After running these commands you should see the services appear in the Windows Services MMC.

Then add the following lines to the %systemroot%\system32\drivers\etc\services file on each SRM server.

The file should appear as below, with one blank line after the last horcm service entry.
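For example (the port numbers are typical defaults; any free UDP ports will work as long as each horcm.conf references the matching service name):

    horcm0    11000/udp    #HORCM0 instance
    horcm1    11001/udp    #HORCM1 instance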

Once the services are installed, the next step is to create the horcm.conf files. The first thing we need for this is a command device; the SRA deployment guide left this step out, but it is documented in the VMware SRM with Hitachi Enterprise Storage whitepaper mentioned earlier. Basically, you create a small LUN and present it to the SRM server as a physical compatibility RDM. Then you initialize the disk and create a basic primary partition, but do not assign a drive letter or format it. I found one HDS document that said this LUN should be 33MB and one that said 36MB, so I made it 40MB. Once this is done we have all that is needed to create the horcm.conf files.

HORCM0.conf on the SRM server at primary site

HORCM1.conf on the SRM server at secondary site
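As a rough sketch (the group name, device names, remote host, and LDEV numbers are placeholders; only the command device follows the naming discussed below), HORCM0.conf might look like this, with HORCM1.conf mirroring it from the other side:

    HORCM_MON
    #ip_address    service    poll(10ms)    timeout(10ms)
    localhost      horcm0     1000          3000

    HORCM_CMD
    #dev_name
    \\.\CMD-11111112-4

    HORCM_LDEV
    #dev_group    dev_name    Serial#     CU:LDEV(LDEV#)
    SHAREPOINT    DATA01      11111112    10
    SHAREPOINT    DATA02      11111112    11

    HORCM_INST
    #dev_group    ip_address          service
    SHAREPOINT    srm2.example.com    horcm1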

A couple of points to note on these files are the relationship between the hosts and devices in the group and the command device naming format. The HORCM_LDEV section in each instance contains a reference to the half of the pair it controls, and the HORCM_INST section in each file contains a reference to the opposite instance. The command device name consists of “\\.\CMD-” followed by the array serial number and the LUN number, e.g. “\\.\CMD-11111112-4”. Now that we have the configuration files, we copy them into %windir% on their respective SRM servers and start the services.

After this is complete we can install the SRA. This is downloaded from the VMware website and the executable is named RMHTCSRA.exe. It is a simple, no-option install. After this there are some environment variables which need to be set.

setx SplitReplication true /m
setx RMSRATMU 1 /m

Then reboot the SRM servers. We are now ready to configure the SRA using the SRM plug-in in vCenter.

Here we see the paired sites in site recovery.

 SRM Sites

Click on configure array managers and we see the following dialog.

 SRA Config 1

Click Add to add a protected site array manager and we see the following configuration dialog.

 SRM Config 2

We enter the name and HORCMINST=0 for the first instance on the primary server in the protected site. Then we use a different name and HORCMINST=1 for the next instance on the secondary server in the recovery site. Here we see both sides configured.

 SRA Final Config
 SRA Final Config 2

The last step in the wizard allows us to confirm the SRA sees the replicated datastores properly.

 Replicated Datastores

We see the LUN numbers match the devices in the horcm<x>.conf files. The datastore group in this diagram consists of four LUNs which also belong to the same TCE consistency group. These are the LUNs being used by our test SharePoint application and database servers.

At this point we are now ready to complete configuration of the protection groups and recovery plans in Site Recovery Manager. The process for configuring these is documented in the SRM administration guide. Protection groups are configured at the protected site and recovery plans are configured at the recovery site. Here is a screenshot of the test recovery plan for our SharePoint environment.

Recovery Plan 1 

When we run a test on this recovery plan we can see the test runs successfully and waits for us to complete testing before clicking continue to return to a ready state.

Recovery Plan 2 

During this phase we can look at a couple of things to confirm what is happening in the process. One is the new datastores we will see in the configuration tab of the DR ESX hosts.

Datastore Snapshots/Replicas 

There was no need to change the LVM.EnableResignature or LVM.DisallowSnapshotLun settings at the host level in ESX 4, as resignaturing is enabled at the volume level and SRM handles it at the time of testing or failover. Another part of the process we can confirm at this time is the status of the TrueCopy pairs. Here we see the pairs are in split status.

TrueCopy Split Status

Now we can complete any other testing to confirm success of the test and then click continue in the recovery plan to return to a ready state. After the test completes we can see the datastores are removed from the recovery ESX hosts and the TCE pairs are returned to a paired status after resynchronization.

After the testing process is completed we can review some of the steps in the SRM logs. The logs are located under %allusersprofile%\VMware\VMware vCenter Site Recovery Manager\Logs. These log entries and the HORCM logs under C:\HORCM\log are the primary sources of information when troubleshooting problems with this process.

I hope someone finds this post useful. Next I am going to be testing this with secondary copies at the recovery site using ShadowImage and Copy-on-Write.
 

Regards,

Dave

HDS AMS 2000 Storage Resource Reporting with PowerShell

Welcome!

I have been creating a few PowerShell scripts for use with the HDS AMS 2000 array. One thing I found I needed was a quick way to look at DP (Dynamic Provisioning) pools, RAID groups, and LUNs. I also wanted to be able to see the associations and filter easily. Since I am working on a new deployment, I have been creating RAID groups and LUNs often, and I needed a quick way to see what I currently had while creating new resources.

I created a PowerShell script that quickly shows existing resources by RAID group or DP pool. It also uses nickname info for the devices, maintained in three CSV files (LU_Nicknames.csv, RG_Nicknames.csv, DP_Nicknames.csv). These are simple comma-delimited text files which contain the ID and nickname of each resource. The files are updated as storage resources are added. This allows me to easily identify the resources and to filter for specific devices.

The script executes three HSNM2 CLI commands and reads the information into object form. The LUN information is then shown grouped by RAID group or DP pool.

Here is the output with the nickname search parameter set to “DB”. This will return all database resources based on the naming standard. If this is left null it will return all resources.

Here is the script:

The script uses the start-session.ps1 file to establish connectivity with the HDS array; additional information regarding the use of this include file can be found at this post. The script then executes HSNM2 CLI commands to return information on DP pools, RAID groups, and LUNs. It uses regular expressions to parse the output and convert it into objects, and it also reads in the nickname files and adds that data to the custom objects.
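As a rough illustration of the parsing approach (the regular expression and column layout are simplified assumptions about the auluref output), each data row from the CLI is matched and turned into an object:

    $luns = auluref -unit ARRAYNAME | ForEach-Object {
        # capture the LU number and its capacity from each data row
        if ($_ -match '^\s*(\d+)\s+(\S+)') {
            New-Object PSObject -Property @{
                LUN      = [int]$matches[1]
                Capacity = $matches[2]
            }
        }
    }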

The objects are then output using the built-in PowerShell formatting engine, with a little custom formatting thrown in for the group headers.

Here is an example nickname file:
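A hypothetical LU_Nicknames.csv, following the naming standard mentioned above, would look something like this:

    ID,Nickname
    0,DB_Data01
    1,DB_Log01
    2,VM_OS01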

I suppose this may not be necessary with the use of Device Manager, but I am still learning that tool and could not quite get this view with it. Besides, I am more of a scripting kind of guy. I also really like the output of this script, as it gives me just the view of the array I need when I am allocating new storage and setting up new resources. I use this script in conjunction with two other scripts for creating LUNs and RAID groups. I plan to post those scripts soon.

Hope this helps,

Dave

HDS AMS 2000 Performance Analysis with PowerShell and PowerGadgets

Welcome!

It has been a while since my last post due to a very busy schedule with a SAN and virtualization project.

I have been working on an implementation of an HDS AMS 2500 midrange array for a VMware vSphere 4 environment. So far everything has been working and performing well. The management software included with the HDS AMS 2000 series array is SNM2 (Storage Navigator Modular 2), a Java-based web application. This software also has a command line version which appears to be pretty comprehensive. It consists of a series of DOS executables, which can be run from PowerShell. There is a series of scripts I have been working on for viewing and creating storage resources on the array, and I will share many of these in future posts. In this post I want to share some scripts I have written to extend the functionality of the performance monitoring utility in SNM2.

The base functionality of the array allows you to capture performance statistics to a text file. The file can be captured manually or automatically for a specified time period and interval, down to one minute. One text file is produced per capture, or all captures can be written to one file. Also, based on what I read in the SNM2 manual, I believe you can do some graphing with the web interface, but it requires an additional license, and personally I think the PowerGadgets graphs are better.

The four scripts I have started with are get-performance_processor.ps1, get-performance_ports.ps1, get-performance_raidgroups.ps1, and get-performance_luns.ps1, which do pretty much what they say and produce the following PowerGadgets charts.

Chart

The chart group is a tabbed interface which allows you to tab through the controllers and ports/RG/LU/Procs depending on the script being used. Each script generates different groups of charts for different performance counters. I have not implemented all of the performance counters, just the ones which are most important to me now. I will be improving these scripts over time and implementing more counters. Here is an example of how the script works.

Script

After executing the script, it will ask whether or not to collect data; if yes, it will prompt for an interval in minutes and a time period, and if no, it will use previously collected data in the default output directory. Next it will ask whether to list the data in text output. Then it will prompt for generation of each group of charts for ports, RAID groups, LUNs, or processors, depending on the script run.

Now to the scripts. All of them rely on the start-session.ps1 script and also require a password file to be set for logging into the array. Additionally, the array has to be registered.

Example 1 shows a PowerShell script which will register an array and set the admin password.
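A minimal sketch, assuming the HSNM2 CLI is on the path; the IP addresses are placeholders, and the account/password-file step varies by CLI version, so it is noted only as a comment:

    # register the array with the SNM2 CLI under the name ARRAYNAME
    auunitadd -unit ARRAYNAME -LAN -ctl0 192.168.1.10 -ctl1 192.168.1.11
    # next, register the USERNAME account and password file; see the
    # HSNM2 CLI reference for the command matching your CLI version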

You will need to replace ARRAYNAME, USERNAME and the IP Addresses for your environment.

Example 2 shows the start-session PowerShell script which defines environmental information.
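A sketch of what such a script defines; the install path and array name are placeholders, and the STONAVM variables are the SNM2 CLI environment settings that allow non-interactive use:

    # start-session.ps1 (sketch)
    $env:STONAVM_HOME     = 'C:\Program Files\Storage Navigator Modular 2 CLI'
    $env:STONAVM_ACT      = 'on'   # allow CLI commands to execute
    $env:STONAVM_RSP_PASS = 'on'   # answer prompts using the registered password
    $unit = 'ARRAYNAME'            # array name used by the other scripts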

You will need to change the paths and ARRAYNAME for your environment.

Example 3 shows the get-performance_processor.ps1 script

This script collects the data from the array into separate files, reads the pertinent data from those files, and transforms it into object form which is fed into the PowerGadgets out-chart cmdlet. The other three scripts are longer as they digest more information.
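The shape of the transform step is roughly the following; the regular expression and column meanings are simplified assumptions about the performance text output, not the actual file format:

    $samples = Get-Content .\perfdata\processor.txt | ForEach-Object {
        # capture a controller/core identifier and a busy percentage
        if ($_ -match '^\s*(\S+)\s+(\d+)\s*$') {
            New-Object PSObject -Property @{
                Core    = $matches[1]
                BusyPct = [int]$matches[2]
            }
        }
    }
    $samples | Out-Chart -Values BusyPct -Label Core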

To use these scripts you will need PowerShell, PowerGadgets (a pay product with a free trial), the SNM2 CLI, and the script files attached to this post. Oh, and an HDS AMS 2000 array.

Here are the script downloads:
start-Session.txt
get-performance_processor.txt
get-performance_ports.txt
get-performance_raidgroups.txt
get-performance_luns.txt

Save the files to your script directory and change the extensions to .ps1

I hope someone finds this useful.

Regards,

Dave

Exchange DB Reporting with PowerShell and Log Parser

I ran across a useful post today as I was roaming through Google Analytics.

Using PowerShell, LogParser and PowerGadgets to get Exchange 2003 storage information – Part 1

Wes Stahler uses Log Parser and PowerShell to report on the free space in an Exchange Database.

This is a task I have done in the past. I will add this script to my toolkit.

Regards,

Dave

ADAM Administration with SharePoint and PowerShell

Welcome!

Recently I worked on finding a simple way to create a web based administrative interface for an ADAM directory. The requirements were to create a simple web based interface to allow business personnel to manage users and groups for an application directory. It was also desirable if this solution would easily integrate with SharePoint.

After doing a little searching on the web, I found a combination that fit the bill.

The Quest AD Management Shell CmdLets – This is a PowerShell snap-in that allows administration of AD and ADAM. The cmdlets are from Quest Software; you can find more info here. I have used them in other scripts and they have come in very handy. To make these work from SharePoint in this solution, the Quest snap-in .dll and its dependencies need to be copied to the global assembly cache and entered as a SafeControl in the SharePoint web.config.

The iLoveSharePoint PowerWebPart 3.0 – This is a web part which allows the execution of PowerShell code from within the web part. It comes from the CodePlex project iLoveSharePoint by Christian Glessner. I was impressed with it; it is easy to install and configure and relatively simple to use.

The PowerWebPart allows you to execute scripts that will render asp.net web controls in the web part. This allows you to retrieve user input from the controls to use as script inputs. The possibilities are endless. For my purposes I only needed a very simple user interface.

I wanted a way to use this for different ADAM partitions, so I allowed for different configuration scripts. The design I decided on consisted of three levels of scripts: one for configuration, one for data access, and one UI script for each web part.

The code sample below is the configuration and connection script. This script defines the user and group containers and the directory connect and disconnect functions.
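A minimal sketch of its shape, with placeholder server and container names:

    # configuration and connection script (sketch)
    $adamServer     = 'adamhost.example.com:389'
    $userContainer  = 'CN=Users,CN=App,DC=example,DC=local'
    $groupContainer = 'CN=Groups,CN=App,DC=example,DC=local'

    function Connect-AppDirectory {
        # Connect-QADService is provided by the Quest snap-in
        Connect-QADService -Service $adamServer
    }

    function Disconnect-AppDirectory {
        Disconnect-QADService
    }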

The next script is the function library for data access to the ADAM directory.
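A sketch of a few such functions, building on the connection script above (the function names are illustrative):

    function Get-AppUsers {
        Get-QADUser -SearchRoot $userContainer
    }

    function New-AppUser([string]$Name) {
        New-QADUser -ParentContainer $userContainer -Name $Name
    }

    function Add-AppGroupMember([string]$Group, [string]$User) {
        Add-QADGroupMember -Identity $Group -Member $User
    }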

The next script is an example of a UI script for the web part. When a new PowerWebPart is created, a template script is added by default; this script provides a framework and some sample code. Christian also has an add-on which allows you to use PowerGUI to edit your scripts from SharePoint. The entire solution contains one script similar to this for each web part.

The screenshot below shows the complete solution. This method was simple, effective, and easy to create. I dot-sourced the corresponding web part script and the connection script in each web part.

This is a pretty quick and easy way to expose some simple administrative or user functionality on a SharePoint Intranet.

I hope this helps.

Regards,

Dave

Connection History with PowerShell and NetStat

Welcome!

This is a little trick some might find useful. I was working on decommissioning some servers and I needed a way to find out what was connecting to these machines. I decided to create a script to log connections. I have done this in the past in various ways which usually involved logging a bunch of data and then querying against it to find the unique connections.

This time it finally occurred to me: just filter the data as it is being collected. So I set out to write a PowerShell script that would keep a running list of client TCP connections to a given machine. This information would be stored in a text file.

The first step was to collect the information and put it into a PowerShell object.

Then the next step was to read the file with the previous information and add it to the PowerShell object.

We can now remove the duplicates from the combined information and save the updated file.
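Putting the three steps together, a minimal sketch (the log path is a placeholder):

    # 1. collect established inbound TCP connections and keep the remote IP
    $current = netstat -n | ForEach-Object {
        if ($_ -match '^\s*TCP\s+\S+\s+(\d{1,3}(?:\.\d{1,3}){3}):\d+\s+ESTABLISHED') {
            $matches[1]
        }
    }

    # 2. read the previously recorded connections, if any
    $logFile  = 'C:\Logs\connections.txt'
    $previous = if (Test-Path $logFile) { Get-Content $logFile } else { @() }

    # 3. de-duplicate the combined list and save it
    @($current) + @($previous) | Sort-Object -Unique | Set-Content $logFile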

We can run this script in a scheduled task at whatever interval is required. Now we have a log of unique inbound TCP connections.

Best Regards,

Dave

PowerShell, Log Parser, PowerGadgets, and GeoIP what fun!

Welcome!

One day I was pondering how I might use Log Parser to map visitors to a website by state. I am aware this is easily done with tools like Google Analytics, but I was interested in using existing logs for the info.

Using the PowerShell and Log Parser functions from the library listed in a previous post, Log Parser can easily get the visitors by IP address from an IIS log.
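For example, using the Log Parser COM objects directly (the log path is a placeholder):

    $logQuery = New-Object -ComObject MSUtil.LogQuery
    $iisW3C   = New-Object -ComObject MSUtil.LogQuery.IISW3CInputFormat
    $sql      = 'SELECT c-ip, COUNT(*) AS Hits FROM C:\inetpub\logs\ex*.log GROUP BY c-ip'
    $records  = $logQuery.Execute($sql, $iisW3C)
    while (-not $records.atEnd()) {
        $rec = $records.getRecord()
        '{0} {1}' -f $rec.getValue('c-ip'), $rec.getValue('Hits')
        $records.moveNext()
    }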

The next task is to get the location of the IP addresses. The tool I chose for this task was the free GeoLite City from MaxMind http://www.maxmind.com/app/geolitecity. Here is an example:

There are a couple of ways to use the MaxMind GeoIP database. It can be used in its native binary format or it can be imported into SQL from CSV files. MaxMind recommends using the binary format, which is what I chose to do. MaxMind also provides APIs for use with a variety of platforms; I chose to use the COM version.

After the location is determined, the counts are calculated. This brings us to the point where we need to chart the results. The tool I chose for this operation is PowerGadgets, a charting tool made for use with PowerShell, and it can be handy. Here is an example:
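The charting call is along these lines; $stateCounts is assumed to hold objects with State and Count properties from the GeoIP lookup step:

    $stateCounts | Out-Chart -Values Count -Label State -Gallery Pie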

 

And here is our final chart.

 

This works pretty well. The only drawback to this solution is that PowerGadgets is a pay tool, but since I own a copy it suits my needs.

Regards,

Dave

Automated Machine Builds with PowerShell and AutoIT

I have been using PowerShell recently to create scripted builds for virtual machines. This method provides some benefits over imaging.

  • A single base image per OS can be used
  • Applications which have issues with imaging can be easily installed, such as SQL Server, BizTalk, Exchange, etc.
  • Scripted installs can be changed or updated more easily than images
  • Additional configuration changes such as registry settings and/or configuration files can be easily done


One challenge I faced with this process was automating the installation of applications which do not have unattended install capabilities.

A great tool I found for this was AutoIT. It provides a method to script the Windows GUI and enables automating those installs. AutoIT consists of a COM object, a scripting language, an editor, a compiler, and a cool window info tool. I will not get into the details of AutoIT here; it has good documentation. I found PowerShell to be much stronger at the scripting part, but the COM object provides excellent window control functionality to PowerShell, and the window info tool that comes with AutoIT is also very handy. Here is a simple example:
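A minimal sketch matching the description below, assuming AutoIT (or at least the AutoItX COM object) is installed:

    $autoIt = New-Object -ComObject AutoItX3.Control
    Start-Process notepad.exe
    # wait for the Notepad window, then type into it
    [void]$autoIt.WinWaitActive('Untitled - Notepad')
    $autoIt.Send('This is a test')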

To run the code above you will need to install AutoIT or just register the AutoItX.dll COM object. The code will launch notepad and type “This is a test”. While this is a pretty useless example it shows the basic purpose of AutoIT and how it is used from PowerShell.

The next piece I looked at was creating a function library to make it easy to use AutoIT from PowerShell.

Example: AutoIT PowerShell function library
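A sketch of the kind of wrappers involved; the function names are illustrative rather than the actual library contents:

    $script:autoIt = New-Object -ComObject AutoItX3.Control

    function Wait-Window([string]$Title, [int]$TimeoutSec = 30) {
        # returns 1 when the window became active, 0 on timeout
        $script:autoIt.WinWaitActive($Title, '', $TimeoutSec)
    }

    function Send-Keys([string]$Title, [string]$Keys) {
        [void]$script:autoIt.WinActivate($Title)
        $script:autoIt.Send($Keys)
    }

    function Click-Button([string]$Title, [string]$ControlId) {
        # drive a specific control, e.g. a Next or Finish button
        [void]$script:autoIt.ControlClick($Title, '', $ControlId)
    }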

Then using the functions from the library above I created application installation functions for various packages.

Example: PowerShell Application Installation Function
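A sketch of one such installer function built on the wrappers above; the installer path and window titles are placeholders for a real package:

    function Install-SampleApp {
        Start-Process 'C:\Installers\SampleApp\setup.exe'
        if (Wait-Window 'SampleApp Setup') {
            Send-Keys 'SampleApp Setup' '{ENTER}'             # accept the welcome screen
            [void](Wait-Window 'SampleApp Setup Complete')
            Send-Keys 'SampleApp Setup Complete' '{ENTER}'    # close the installer
        }
    }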

Using the method above I created a library of PowerShell installation functions to be called from automated build scripts.

This seems to be working pretty well so far and it allows automated builds to be scripted quickly. PowerShell and AutoIT work well together, the AutoIT COM object provides excellent GUI control and PowerShell provides strong scripting and debugging features.

Best Regards,

Dave