Sunday 27 July 2014

SharePoint 2013 Service Account creation script and how I write scripts.

In this post I am going to share a PowerShell script I wrote called Create-SPServiceAccounts.ps1, which creates the accounts needed for a SharePoint Server 2013 installation. There are many similar scripts, but I don't like how they are written. I have also added an option in my script to supply an input file with the data for the accounts (account name, description and password); the script can read CSV, XML and Excel files as input. In addition, the accounts can be created remotely by running the script from a computer that may or may not be in the domain. I think the script is well written and demonstrates some useful patterns and techniques for writing readable, reusable PowerShell scripts. In this post I will give an overview of how I write scripts and will try to explain some of the techniques I used in the service account creation script.

What tool am I using for writing scripts?

Most of the time I am using a free third-party ISE (Integrated Scripting Environment) tool called PowerShell Plus Professional by Idera. It has some great features like code highlighting, and it is easy to run and debug scripts with it. Another great feature is that you can copy the code as HTML. I find this very useful because I don't use any code highlighters in this blog; I just copy my code from the tool and paste it into the HTML source of the posts. The tool also comes with a big script library for Windows, AD, SharePoint, SQL and more. It has some limitations, though, so I also use the native PowerShell ISE and of course the PowerShell console to run and debug.

How do I read three different types of input files?

A bit of background. One of the best practices in PowerShell scripting is modularity: every piece of code with distinct functionality that will be reused is organised into a function with its own parameters and scope. The code in this script that does the initial checks and logic is only 25 lines, but the functions are 222 lines.
So I have three different functions for reading the different types of input files. The script checks the file extension and decides which function to use to read the input file, as in the sketch below.
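
Here is a minimal sketch of that dispatch logic. The variable names $InputFile and $DefaultPass are illustrative; they are not necessarily the parameter names used in the published script.

$extension = [System.IO.Path]::GetExtension($InputFile).ToLower()
switch ($extension)
{
    '.csv'  { $accounts = Read-CSV -Path $InputFile -DefaultPass $DefaultPass }
    '.xml'  { $accounts = Read-XML -Path $InputFile -DefaultPass $DefaultPass }
    '.xlsx' { $accounts = Read-Excel -Path $InputFile -DefaultPass $DefaultPass }
    default { throw "Unsupported input file type: $extension" }
}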

The function for CSV:

function Read-CSV{
    [CmdletBinding()]
    Param (
        [parameter(Mandatory=$true)]
        [string]$Path,
        [parameter(Mandatory=$false)]
        [string]$DefaultPass
    )
    # Import-Csv returns one object per data row with properties named after the columns
    $csvInput = Import-Csv -Path $Path
    $OutputAll = @()
    ForEach($row in $csvInput)
    {
        $Output = @{
            "AccountName" = $row.AccountName
            "Description" = $row.Description
            "Password" = $row.Password
        }
        # A default password overrides the one from the input file
        If($DefaultPass){
            $Output['Password'] = $DefaultPass
        }
        If (Test-AccountData -Data $Output){
            $OutputAll += $Output
        }
    }
    Write-Output $OutputAll
}

The key to reading CSV files is the native cmdlet Import-Csv. It reads the file and creates an object for each data row (the header row supplies the column names), with a property for each column and its corresponding value. I then build a hash table for each object, and the function returns a collection of hash tables, one for every account that needs to be created.
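
For reference, this is the kind of CSV layout the function expects, based on the column names used above (the account values here are just made up for illustration):

AccountName,Description,Password
SP_Farm,SharePoint Farm account,P@ssw0rd1
SP_Services,Service Application Pool account,P@ssw0rd2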

The function for XML:

function Read-XML{
    [CmdletBinding()]
    Param (
        [parameter(Mandatory=$true)]
        [string]$Path,
        [parameter(Mandatory=$false)]
        [string]$DefaultPass
    )
    [xml]$xmlInput = Get-Content $Path
    $OutputAll = @()
    ForEach ($xElement in ($xmlInput.ServiceAccounts.Account))
    {
        $Output = @{
            "AccountName" = $xElement.AccountName
            "Description" = $xElement.Description
            "Password" = $xElement.Password
        }
        If($DefaultPass){
            $Output['Password'] = $DefaultPass
        }
        If (Test-AccountData -Data $Output){
            $OutputAll += $Output
        }
    }
    Write-Output $OutputAll
}

For reading XML I create a new XmlDocument object by casting the content of the XML file to [xml]. Then we can work with every XmlElement that describes an account and take its properties. As in the other functions, we build a hash table for every account, and once all the information has been read the function returns a collection of hash tables.
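
Based on the element names used in the function, the XML input file is assumed to look like this (the values are again only for illustration):

<ServiceAccounts>
    <Account>
        <AccountName>SP_Farm</AccountName>
        <Description>SharePoint Farm account</Description>
        <Password>P@ssw0rd1</Password>
    </Account>
    <Account>
        <AccountName>SP_Services</AccountName>
        <Description>Service Application Pool account</Description>
        <Password>P@ssw0rd2</Password>
    </Account>
</ServiceAccounts>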

The function for Excel files:

function Read-Excel{
    [CmdletBinding()]
    Param (
        [parameter(Mandatory=$true)]
        [string]$Path,
        [parameter(Mandatory=$false)]
        [string]$DefaultPass
    )
    # Requires Excel to be installed - the file is read through COM automation
    $objExcel = New-Object -ComObject Excel.Application
    $objExcel.Visible = $false
    $WorkBook = $objExcel.Workbooks.Open($Path)
    $WorkSheet = $WorkBook.sheets | Where {$_.Index -eq '1'}
    $intRowMax = ($WorkSheet.UsedRange.Rows).count
    $OutputAll = @()
    # Row 1 holds the column headers, so the data starts at row 2
    for($intRow = 2 ; $intRow -le $intRowMax ; $intRow++)
    {
        $Output = @{
            "AccountName" = $WorkSheet.Range("A$($intRow)").Text
            "Description" = $WorkSheet.Range("B$($intRow)").Text
            "Password" = $WorkSheet.Range("C$($intRow)").Text
        }
        If($DefaultPass){
            $Output['Password'] = $DefaultPass
        }
        If (Test-AccountData -Data $Output){
            $OutputAll += $Output
        }
    }
    Write-Output $OutputAll
    $objExcel.Quit()
    # Not the cleanest way to release the COM instance - see the note below
    (Get-Process -name excel -ErrorAction SilentlyContinue | Sort-Object StartTime)[-1] | Stop-Process -ErrorAction SilentlyContinue
    Remove-Variable objExcel
}

For reading an Excel file (.xlsx) we need the Excel application installed. The script creates an Excel COM application instance, opens the file and selects the first worksheet. Then, unfortunately, we need to read the content of each cell; as far as I know there is no way to treat every row as an object and take its content like we do with the CSV. So we take the count of the used rows, skip the first (header) row, and read every cell to build a hash table, inside a for loop driven by the number of the row we are currently reading. Again, the function returns a collection of hash tables. All three functions output the same kind of objects, so after the input file has been processed we no longer care what type it was; we just process every hash table in our collection.
For the record, killing the Excel.exe process is not the correct way to remove a COM object instance, but I had some weird behaviour when I used the code below to get rid of the COM instance.

[System.Runtime.Interopservices.Marshal]::ReleaseComObject($x)

All three functions are very similar: they have the same parameters and return the same kind of output, so once the input file has been read, the rest of the script does not care which format it came from.
For more information on PowerShell functions you can read about_Functions, about_Functions_Advanced and the rest of the about_Functions* topics in the PowerShell help.

Why so many Try/Catch constructions?

Try/Catch/Finally constructions are the preferred way of handling errors in PowerShell. These statements were introduced in v2.0; before that only the Trap statement was available. Error handling is a big subject in PowerShell, so I will try to explain how Try/Catch/Finally works with an example that creates a new AD user.


Try{
    New-ADUser -SamAccountName "JohnD1111111111111111111" -Name "John Doe" -AccountPassword $secPass -DisplayName "John Doe" -ErrorAction Stop
}
Catch [System.Exception]{
    Write-Host "Unable to create Account, folowing exception occurred: $($_.Exception.Message)" -ForegroundColor Red
}

In this sample we run the block in the Try statement. If an error/exception occurs, the block in the Catch statement is executed. I have not included a Finally statement, but if there were one, its code would run no matter whether we hit an exception or not; it is optional. In our example New-ADUser will fail because the value for SamAccountName is longer than 20 characters. However, the Catch block is only triggered if the error produced in the Try block is terminating. Most PowerShell cmdlets produce non-terminating errors, meaning that if the command fails the script/loop/pipeline will continue. Our cmdlet for creating a new AD user also produces a non-terminating error; this is why we explicitly set ErrorAction to Stop, which turns the error into a terminating one. The whole idea of error handling is to detect that we have an error and do something about it. In this sample we will not see the raw error the cmdlet produces; instead the Catch block runs and shows a custom message that contains the message of the original exception and tells us what is wrong, without dumping the entire exception from PowerShell. And because the error is handled in a Try/Catch construction, the whole script does not stop and will continue.
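
To illustrate the Finally part that I left out above, here is a small sketch of the same example with a Finally block added; $secPass is assumed to be a SecureString prepared earlier:

Try{
    New-ADUser -SamAccountName "JohnD" -Name "John Doe" -AccountPassword $secPass -DisplayName "John Doe" -ErrorAction Stop
}
Catch [System.Exception]{
    Write-Host "Unable to create Account, following exception occurred: $($_.Exception.Message)" -ForegroundColor Red
}
Finally{
    # Runs in both cases - after a successful creation and after a handled exception
    Write-Host "Finished processing John Doe"
}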

How is a locally defined function executed on a remote computer?

You can launch the account creation script from the DC on which you want to create the accounts, or you can run it from a completely different computer and create the accounts on the target DC remotely.
This is achieved very easily: I simply execute the function that creates the accounts locally on the remote DC, and I also pass along the parameter values it will use. The line looks like this:

Invoke-Command -Session $session -ScriptBlock ${function:Create-UsersLocal} -ArgumentList $Hash,$OUnit

Create-UsersLocal is the function that creates the accounts when the script runs directly on the DC; I am just passing its definition via the ScriptBlock parameter and its parameter values via ArgumentList.
And now the explanation. Let's say we have a function called Hello-World, a very simple function that just says hello from the computer it runs on, in a given colour. The colour of the text can be set as a parameter. When we load the function it ends up on a PSDrive called 'Function'. We can see on the screenshot below that this PSDrive contains our function. If we supply just the path to the function, it will be executed on the remote computer as-is, and it will fail because the function is not defined in the remote session. Our goal is to get the definition of the FunctionInfo object; this is done by adding a dollar sign when supplying the script block for the remote session, which forces PowerShell to read the definition of the function and convert it into a script block that is usable in the remote session. We also transfer a variable with the same name as the parameter of the function, and this is how the magic happens. You can see it on the screenshot below.
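
Here is a rough sketch of the same idea in code. The body of Hello-World and the parameter name $Color are my own illustration; $session is assumed to be an already established PSSession, as earlier in the post:

function Hello-World{
    Param(
        [string]$Color = 'Green'
    )
    Write-Host "Hello World from $env:COMPUTERNAME" -ForegroundColor $Color
}

# ${function:Hello-World} reads the definition from the Function: PSDrive and passes it as a script block
Invoke-Command -Session $session -ScriptBlock ${function:Hello-World} -ArgumentList 'Yellow'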



For more information on PowerShell scripting I highly recommend visiting http://powershell.org/, especially their Free eBooks section.

You can download my script from the TechNet Gallery.

Friday 18 July 2014

Get Hidden Service Instances in SharePoint [Tip]

I am not sure if you know this, but there are service instances in SharePoint that you cannot see in the UI, no matter whether you filter the "Services on Server" view by Configurable or All. You cannot see them even with the PowerShell cmdlet Get-SPServiceInstance. You can get "all" service instances from a property of the SPServer object. Here is the PowerShell line to get them:

(Get-SPFarm).Servers['<ServerName>'].ServiceInstances

Whether a service instance is visible or not depends on a property of the service instance object that can be True or False: the 'Hidden' property. Here is the result from my SharePoint Server 2013 dev farm when I look for hidden service instances.
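
If you want to see only the hidden ones, you can filter on that property; something like this should do:

(Get-SPFarm).Servers['<ServerName>'].ServiceInstances | Where-Object {$_.Hidden -eq $true} | Select-Object TypeName, Status, Hidden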


In the result you can see some services that are exceptions to the general rule and are visible in the UI, like SharePoint Server Search. There are also services like the SSP Job Control Service; this is a very important service that is in charge of configuration synchronisation across the different servers in our farm and is related to the Application Server Administration Service timer job, but that is a long story for our "short tip" post.

Wednesday 16 July 2014

Download a Farm Solution from Solution Management [Tip]

This short post is from the "short tips" category. Recently I needed the actual solution package (.wsp) from a SharePoint 2013 farm. Unfortunately, the only thing we had was the farm solution uploaded to Solution Management. Here is a short script to get the actual .wsp package that is already uploaded.


$farm = Get-SPFarm
$file = $farm.Solutions.Item('mysolution.wsp').SolutionFile
$file.SaveAs('c:\temp\mysolution.wsp')

This should also work on SharePoint 2010.

Monday 7 July 2014

Delete a SharePoint List/Library when the Remove/Delete option is missing.

This week I received a question from one of our customers and decided to share the solution, because you may find it useful if you get into a similar situation. The background story is that this customer uses SharePoint 2013 for their intranet. They have a custom solution with two site definitions (templates) that address two different business needs.
The reported issue was that on the sites created with one of the definitions the users are able to delete the 'Documents' library, but on the sites created with the second definition they are unable to do so. The Remove button is missing, as in the picture below.


Both sites come with a 'Documents' library when they are created.
I did some investigation and found that in the first template the library comes from the site definition; the site is designed to be deployed with such a library. In the second template, where the user cannot delete the library, it comes from the web-level feature 'SharePoint Server Publishing' that is activated on the root web; this is also by design of the definition. You can also see this in the description of the library. Along with this library, the 'SharePoint Server Publishing' feature creates some important lists and libraries like 'Pages', 'Workflow Tasks' and a couple more. By default these lists and libraries cannot be deleted, which is why the 'Remove' option is missing.
Whether a SharePoint list can be removed or not (lists and libraries are the same object in the SharePoint object model) depends on the value of a property of the SPList object called AllowDeletion. If the value is True the list can be deleted; if it is False you cannot delete it.
I explained this to the customer, but they required the ability to delete this particular library from the site in question. Luckily, we can set the value of AllowDeletion on an existing list or library.
Be very careful when you delete such lists, as it can break your site! In my case I decided that since the library was not used by the users or for site assets, I could safely delete the 'Documents' library from one of the sites. Below is the PowerShell script I used to change the value of the AllowDeletion property.


$web = Get-SPWeb('http://publish.auto.l/')
$list = $web.Lists['Documents']
$list.AllowDeletion = $true
$list.Update()


After this, the Remove/Delete options are available in their usual locations: Site Contents, Content and Structure, etc.