2 ways to retrieve content of Azure Automation Runbook within another runbook

Hi,

In this post, we’ll see how we can retrieve the content of an Azure Automation runbook from within another runbook. The content is loaded into memory.

There are two ways to do this:

  1. Using the Get-Content cmdlet
  2. Using the Export-AzureRmAutomationRunbook cmdlet

Using Get-Content Cmdlet

Consider, for example, that the current runbook is Runbook1.ps1. The standard way to use this cmdlet is to simply pass the path of the runbook, as shown below:

Get-Content ./Runbook2.ps1

!!Caveat here!!

Now, for the above cmdlet to execute correctly, Runbook2.ps1 needs to be present in the sandbox environment at the path where the current runbook (Runbook1.ps1) is running. This is NOT done by default when a runbook starts. Azure Automation DOES NOT copy all the runbooks into the sandbox before running one. It only retrieves the ones that it sees being INVOKED from the parent runbook.

Azure Automation parses the parent runbook and looks for any “invocations” in the code. Since Get-Content ./Runbook2.ps1 doesn’t look like an invocation, Runbook2 won’t be available and the cmdlet will return an error as shown below.

Get-Content : Cannot find path 'C:\Temp\pdur5g3d.hvp\runbook2.ps1' because it does not exist.

How to fix this

In order to fix the above issue, we need to find some way to trick Azure Automation into thinking that the parent runbook is going to invoke the second runbook. We can do this using the below snippet.

if ($false)
{
    # This block never executes, but the line below looks like a script
    # invocation, so Azure Automation stages Runbook2.ps1 into the sandbox.
    ./Runbook2.ps1
}

Get-Content ./Runbook2.ps1

The line of code inside the IF block looks like a script invocation, so Azure Automation will ensure that this runbook is present in the sandbox before running the parent runbook, and Get-Content will now retrieve the content of Runbook2.ps1 successfully. The parser is not intelligent enough to understand that this code will never execute.
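As a side note, if you want the whole script as a single string rather than an array of lines, Get-Content supports the -Raw switch (available in PowerShell 3.0 and later); a small sketch:

$RunbookContent = Get-Content ./Runbook2.ps1 -Raw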

Using Export-AzureRmAutomationRunbook cmdlet

Another way to retrieve the content of a runbook is to first export it to a local folder within the sandbox and then use the Get-Content cmdlet to read it. The syntax looks like this:

$ScriptFolder = "C:\Scripts"

# Create a local folder inside the sandbox to hold the exported runbook.
New-Item -ItemType Directory -Path $ScriptFolder -Force

# Export the runbook from the Automation account to the local folder.
Export-AzureRmAutomationRunbook `
    -ResourceGroupName $AutomationResourceGroup `
    -AutomationAccountName $AutomationAccount `
    -Name "Runbook2" `
    -AzureRmContext $SubscriptionContext `
    -OutputFolder $ScriptFolder -Force

Get-Content -Path (Join-Path $ScriptFolder "Runbook2.ps1")

This will export the runbook to the given location, and the call to Get-Content will load its content into memory.

Which one to use when?

Now, as you can see, both approaches use Get-Content. However, the first method looks a little hackish, as the IF block is never going to be entered. So if you need to load the contents of multiple runbooks, I’d suggest using the second method instead, as the code looks much cleaner and clearer.

Another point to keep in mind is that all AzureRmAutomation cmdlets need to authenticate to Azure RM first, which is an extra step and might not always be convenient. However, if you’re going to authenticate elsewhere for other reasons anyway, you can use the Export-AzureRmAutomationRunbook cmdlet without any concerns.
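For reference, here’s a minimal sketch of that authentication step, assuming the Automation account has the standard “AzureRunAsConnection” Run As connection (the connection name and the $SubscriptionContext variable are assumptions for illustration):

# Hypothetical: authenticate with the Automation account’s Run As connection
# before calling any AzureRm cmdlets.
$Connection = Get-AutomationConnection -Name "AzureRunAsConnection"
Add-AzureRmAccount -ServicePrincipal `
    -TenantId $Connection.TenantId `
    -ApplicationId $Connection.ApplicationId `
    -CertificateThumbprint $Connection.CertificateThumbprint
$SubscriptionContext = Set-AzureRmContext -SubscriptionId $Connection.SubscriptionId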

So finally, it’s up to you to decide which cmdlet to use, depending on whether you’re OK with making your code look a bit hackish or not. Personally, I prefer using Export-AzureRmAutomationRunbook, as it is clearer and shows the person reading our code exactly what we intend to do.

Hope this helps!


[PowerShell Tip] – Assign array elements to separate variables in single line of code

Hi,

In this post, we’ll see how we can assign array elements to separate variables in a single line of code.

Consider a sample array

$TestArray = @("FirstElement", "SecondElement")

Now, to get the values of the elements into separate variables, we just need to declare the variables separated by commas and assign the array to them, as shown below.

$FirstValue, $SecondValue = $TestArray

When you print the $FirstValue and $SecondValue variables, you will see that they hold the values of the corresponding array elements.

One use case for this is when we have a string that we need to split, and we need to use all the parts of the string.

Let’s say for example we have a string as given below

Amogh Natu;28;M;Hyderabad;India

We could read the whole string, split it on the “;” character, and assign the resulting array to 5 variables, namely “FullName”, “Age”, “Sex”, “City”, and “Country”, and then use these variables later in the code as needed, as sketched below.
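A quick sketch of that (the $Record variable name is just for illustration):

$Record = "Amogh Natu;28;M;Hyderabad;India"

# Split on ";" and assign each part to its own variable in one line.
$FullName, $Age, $Sex, $City, $Country = $Record -split ";"

Write-Output $FullName   # Amogh Natu
Write-Output $City       # Hyderabad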

One point to remember here is that this is feasible and advisable only when we know the array will have a limited number of elements. It doesn’t make sense to use this technique if the array has a lot of elements or its size is dynamic.

Hope this helps!

[PowerShell Tip] – Prevent cmdlet from printing anything to output

Hi,

In this short post, we’ll see how we can prevent a PowerShell cmdlet from printing anything to the standard output stream. There are two ways you can do this:

  1. Piping the output of the cmdlet to Out-Null
    e.g. Set-AzureRmContext -SubscriptionId "SubId" | Out-Null
  2. Assigning the output of the cmdlet to $null
    e.g. $null = Set-AzureRmContext -SubscriptionId "SubId"

Either of these will prevent the output from being printed to the output stream.
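As far as I know, two more variants achieve the same effect: casting the expression to [void], or redirecting the output to $null.

# Both of these also discard the cmdlet's output.
[void](Set-AzureRmContext -SubscriptionId "SubId")
Set-AzureRmContext -SubscriptionId "SubId" > $null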

P.S. I learned this today and would love to know if there are more ways to achieve the same 🙂

Hope this helps!

Analyzing PowerShell scripts with PSScriptAnalyzer

Hi,

This post will show you how you can use PSScriptAnalyzer to check whether your PowerShell scripts or functions conform to industry best practices.

PSScriptAnalyzer (PSSA going forward) is a static code analyzer that checks your PowerShell scripts, modules, and functions and gives a detailed report on any rules that they do not conform to.

PSSA is a tool developed by Microsoft that can be downloaded and installed from the PowerShell Gallery using the

Install-Module -Name PSScriptAnalyzer

cmdlet. As of this writing, there are 51 rules, created as per the best practices followed in the industry for PowerShell scripts. You can view these rules using the Get-ScriptAnalyzerRule cmdlet as shown below. You can use Out-GridView to see the rules more clearly. Note that this cmdlet is available only after you install the PSScriptAnalyzer module.

Get-ScriptAnalyzerRule | Out-GridView

You will see the output as shown below.

[Screenshot: Get-ScriptAnalyzerRule output in an Out-GridView window]

All these rules will be validated against the file(s) that you want to analyze.

To analyze a file or set of files, you can use the

Invoke-ScriptAnalyzer -Path [<Path(s)_to_Script>] | Out-GridView

cmdlet. I prefer to use Out-GridView just to get the output in a clearer form; you can choose to include or omit it. The output for a sample script is shown below.

[Screenshot: Invoke-ScriptAnalyzer violation report for a sample script]

As you can see, the analyzer gives a clear report of all the violations currently present in my script, with a detailed message about each issue and what I can do to resolve it. The report also shows the severity of each violation and the line number in the script.

We can analyze multiple scripts at the same time by passing a folder path to the Invoke-ScriptAnalyzer cmdlet instead of the path of a single script.

Invoke-ScriptAnalyzer -Path "D:\" -Recurse

The -Recurse flag instructs the cmdlet to check and analyze scripts in sub-folders as well. You can see the complete output of all the scripts as shown below.

[Screenshot: Invoke-ScriptAnalyzer results for all scripts under D:\]

You can also check your scripts for a particular rule only by using the -IncludeRule parameter in the Invoke-ScriptAnalyzer cmdlet. Or you can exclude certain rules by using the -ExcludeRule parameter and passing the set of rule names to be excluded.

Invoke-ScriptAnalyzer -Path "D:\SampleScript.ps1" -IncludeRule "PSAvoidUsingWriteHost"

This would cause the script to be checked for only the PSAvoidUsingWriteHost rule. The -IncludeRule parameter accepts a string array, so you can pass multiple rules to include, separated by commas.

Similarly, you can pass one or more rules to exclude using the -ExcludeRule parameter, which causes the Invoke-ScriptAnalyzer cmdlet to ignore those rules.
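For example, here’s a sketch of both (PSAvoidUsingCmdletAliases is another of the built-in rules, used purely for illustration):

Invoke-ScriptAnalyzer -Path "D:\SampleScript.ps1" -IncludeRule "PSAvoidUsingWriteHost", "PSAvoidUsingCmdletAliases"

Invoke-ScriptAnalyzer -Path "D:\SampleScript.ps1" -ExcludeRule "PSAvoidUsingWriteHost"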

You can also create your own custom rules module (a .psm1 file) and tell the Invoke-ScriptAnalyzer cmdlet to use it.

You need to use the -CustomizedRulePath parameter, which accepts a string array as its value, so you can pass one or more custom rule files. The structure of a custom rule file and how to create one are out of scope for this post; you can refer to this for details on creating a custom rule file.

There are some more parameters to the Invoke-ScriptAnalyzer cmdlet, like -Severity, which lets us validate only the rules of specific severities, and -LoggerPath, which can be used to specify paths to custom logger assemblies.
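For example, a sketch that reports only warning- and error-severity violations (the script path is just illustrative):

Invoke-ScriptAnalyzer -Path "D:\SampleScript.ps1" -Severity Warning, Error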

Hope this helps!

[Powershell-Basics] – Looping through hash table

Hi,

This post will show how to loop through a hash table in PowerShell.

Let’s say, for example, we have a hashtable object in PowerShell as shown below:

$SampleTable = @{}   # Syntax for creating hashtable --> @{}
$SampleTable."Name" = "Amogh"
$SampleTable."City" = "Hyderabad"
$SampleTable."Country" = "India"

For looping through the hashtable, we can use the GetEnumerator() method of the Hashtable class. This returns an enumerator that can be used in a Foreach loop, and we can use the Key and Value properties of each hashtable entry.

Foreach($Entry in $SampleTable.GetEnumerator())
{
    # Note the $() subexpressions: "$Entry.Key" would expand $Entry
    # and then append the literal text ".Key".
    Write-Output "$($Entry.Key) ------ $($Entry.Value)"
}

The output of the above snippet will be as follows (a plain hashtable does not guarantee entry order, so the lines may appear in a different order):

Name ------ Amogh
City ------ Hyderabad
Country ------ India
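Another variant I’m aware of is to loop over the Keys collection and index into the table:

Foreach($Key in $SampleTable.Keys)
{
    Write-Output "$Key ------ $($SampleTable[$Key])"
}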

Hope this helps!

Save with Encoding in Visual Studio

Hi,

This post shows how you can save files while retaining their encoding and line-ending format. For example, if you’re writing a shell script that will run on a UNIX operating system, the script is supposed to have (LF) line endings, while Windows line endings usually follow the (CR)(LF) convention.

So to save a file with a specific encoding, you need to click on Save As…, then click on the arrow beside the Save button and click on Save with Encoding.

After selecting the file location, VS will show the encoding window, where we can select the required encoding. Select it and click on Save. This will save the file with the selected encoding.

Other editors like Notepad++ and Sublime Text also provide similar options for setting line endings. Personally, I prefer Notepad++. However, the best way to prevent line endings from being changed by mistake when moving a script from a Windows machine to a Unix-like system is to edit the script with the vi editor on the Unix system itself, which avoids the /bin/bash^M: bad interpreter: error message.

Hope this helps!

Shell script with 10+ parameters? Remember this….

Hi,

This post is mainly aimed at shell-script newbies like myself, and the goal is that they don’t end up wasting time on this as I had to.

So, if you are creating a new shell script that requires 10 or more parameters, you need to remember one thing: you can’t access the 10th and later parameters with just $10, $11, etc.

If you simply write something like the snippet below,

...
# BROKEN: $10 is parsed as ${1} followed by a literal 0.
TenthParameter=$10
EleventhParameter=$11
echo $TenthParameter
echo $EleventhParameter
...

The output would actually look like the below (assume the first parameter’s value is “First”):

First0
First1

and NOT the actual values of the tenth and eleventh parameters.

This is because the bash interpreter first sees $1 (in the “$10”), substitutes its value immediately, and leaves the trailing 0 as literal text.

To get the value of parameters from the 10th onward, you need to put the number in curly braces {}, as shown below.

...
# WORKS! Braces make the full parameter number explicit.
TenthParameter=${10}
EleventhParameter=${11}
echo $TenthParameter
echo $EleventhParameter
...

This would print the actual values of the parameters.

Hope this helps!