Renew Certificate in Provider Hosted Apps Scenario

With a certain customer of mine I recently had an issue where, in the span of a month, all of the certificates for the Provider Hosted Apps (PHA) domain had to be renewed in four environments (including PROD).

I was lucky to be the successor of somebody who made the same mistake the service provider made a month later, so I was prepared and could save the day. In hopes of saving you the time it took me to force the service provider's hand (around 10 hours of telco time), I want to give you a brief overview of how to tackle this, the full list of reference articles and a script to set the SharePoint part (the trust).

First a short introduction.

Certificates. Certificates are often used to encrypt data communication between machines, so that two parties can communicate without a third party listening in. They are also used to verify the identity of whoever initiates the communication.

In the scenario of SharePoint and PHA we have two parties. We have the PHA Server Farm and the SharePoint Server Farm. Usually each farm consists of at least 2 servers for redundancy/ high availability reasons.

When HTTP communication is done via SSL, each website in IIS has a binding on port 443, which uses a certificate to encrypt the data the site responds with to requests.

Any admin can swap the certificate in IIS. All you need to do is check the certificate that exists and request a new certificate either self-signed, internally trusted or externally trusted with the correct SAN (Subject Alternative Name).
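If you just need something for a lab, a minimal sketch for a self-signed certificate (assuming Windows Server 2012 or later, where New-SelfSignedCertificate is available; the names are placeholders) could look like this:

# Minimal sketch, assuming Windows Server 2012+; creates a self-signed certificate covering the app domain and puts it into the machine's Personal store.
New-SelfSignedCertificate -DnsName "*.apps.mycompany.com", "apps.mycompany.com" -CertStoreLocation Cert:\LocalMachine\My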

As an example, let’s assume the following setup:
SharePoint has a wildcard certificate, like *.apps.mycompany.com. The PHA environment has a corresponding certificate for apps.mycompany.com. This may be the same certificate, if you request the big kahuna, i.e. a multi-SAN, wildcard certificate. Usually this is not the case, and it is not necessary.

The PHA IIS will have the apps.mycompany.com certificate, and SharePoint will have the wildcard certificate. But how does SharePoint make sure that PHAs are not added to a different server, one that runs different code and pretends to be the PHA server? There is a trust between these servers on the SharePoint side. In essence this article has one message: “Don’t forget this trust!”

The underlying process of replacing the apps.mycompany.com certificate consists of four easy steps, all of which are necessary:

  1. Replace the apps.mycompany.com certificate in the IIS of each PHA server

    This is a no-brainer. Request the certificate, get the response and use certmgr.msc to import the certificate into the Personal store of the machine account (or script it, see the sketch after this list). Make sure the certificate has a private key. It can be self-signed, internally trusted or externally trusted (depending on your scenario, i.e. whether you externalize your farm or not).

  2. Export the apps.mycompany.com certificate as pfx (with private key)

    Export it with private key (and password) and put it into the location, where the web.config of each Provider Hosted App can access it. Usually this certificate is stored in a central location on each IIS PHA Server.

  3. Export the apps.mycompany.com certificate as cer (without private key)

    Export it without private key and put it into a location on a SharePoint server, where you can access it from the SharePoint Powershell script in the next step.

  4. Replace the SharePoint trust via script

    The certificate (cer) is referenced in two locations in SharePoint (SPTrustedRootAuthority, STSTrustedSecurityTokenIssuer). You can set it in the SPTrustedRootAuthority by updating the object and by deleting the STSTrustedSecurityTokenIssuer object and recreating this with the correct IssuerName and RegisteredIssuerName ([Issuer GUID]@[Realm]). See Script below.
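Steps 1 through 3 can also be scripted. A minimal sketch, assuming Windows Server 2012 or later (where the PKI module cmdlets are available) and placeholder paths and passwords:

# Step 1: import the renewed certificate (with private key) into the machine's Personal store.
$pfxPassword = Read-Host "PFX password" -AsSecureString
Import-PfxCertificate -FilePath "C:\Certs\apps.mycompany.com.pfx" -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword

# Step 2: export it as pfx (with private key) to the central location the PHA web.configs point to.
$cert = Get-ChildItem Cert:\LocalMachine\My | ? { $_.Subject -like "*apps.mycompany.com*" } | Select-Object -First 1
Export-PfxCertificate -Cert $cert -FilePath "D:\Certs\apps.mycompany.com.pfx" -Password $pfxPassword

# Step 3: export it as cer (public key only) for the SharePoint trust script in step 4.
Export-Certificate -Cert $cert -FilePath "D:\Certs\apps.mycompany.com.cer"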

EDIT: The original screenshot differed from the code below, because a crucial parameter was missing. Line 29 must have the flag “-IsTrustBroker” as seen below. I wrote a specific article on this topic here


param (
    [string] $CertificateSubjectAlternativeName = "apps.mycompany.com"
    , [string] $CertificatePathLocation = "[MyDrive]:\[MyPath]\apps.mycompany.com.cer"
)

asnp microsoft.sharepoint.powershell -ea 0

$certificate = $null;
$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($CertificatePathLocation);

if($certificate -ne $null) {
    $tra = $null;
    $tra = Get-SPTrustedRootAuthority | ? { $_.Certificate.Subject.Contains(${CertificateSubjectAlternativeName}) }

    if( $tra -ne $null ) {
        $tra.Certificate = $certificate;
        $tra.Update();
    } else {
        Write-Host -ForegroundColor Red "Error: No Certificate with SAN '${CertificateSubjectAlternativeName}' found in Root Authority Store.";
    }

    $sci = $null;
    $sci = Get-SPTrustedSecurityTokenIssuer | ? { $_.SigningCertificate.Subject.Contains(${CertificateSubjectAlternativeName}) }

    if( $sci -ne $null ) {
        $regIssuerName = $sci.RegisteredIssuerName;
        $issuerName = $sci.DisplayName;
        $sci.Delete();
        New-SPTrustedSecurityTokenIssuer -Name "${issuerName}" -RegisteredIssuerName "${regIssuerName}" -Certificate $certificate -IsTrustBroker;
    } else {
        Write-Host -ForegroundColor Red "Error: No Certificate with SAN '${CertificateSubjectAlternativeName}' found in Trusted Security Token Issuer Store.";
    }
} else {
    Write-Host -ForegroundColor Red "Error: Certificate not found at location '${CertificatePathLocation}'.";
}

The last step is not mandatory, but we had to do it on the IIS servers of the PHA environment. The certificate gets cached by the user profile of the user running the app pool. Once you replace the certificate, the app pool is no longer able to find the cached file. This surfaces as an ugly error like: ‘CryptographicException: The system cannot find the file specified.’

This is how to fix that: open IIS –> ApplicationPools –> DefaultAppPool –> “Right Click” –> Advanced Settings –> Load User Profile | set this value to “true”.

It seems a bit absurd to change this setting since it did not have to be set when configuring the PHA connection in the first place, but it does the trick.
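If you prefer to script this setting as well, a minimal sketch using the WebAdministration module (assuming the DefaultAppPool is the pool in question):

# Minimal sketch, assuming the WebAdministration module and the pool name; enables Load User Profile.
Import-Module WebAdministration
Set-ItemProperty IIS:\AppPools\DefaultAppPool -Name processModel.loadUserProfile -Value $true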


Send A SOAP Message to Nintex Workflow WebService – DeleteWorkflow

Yesterday I was challenged to develop a script that deletes a list workflow on 105 sites and publishes it with a new name.

There is a bug within Nintex where, when you copy a site collection, the GUIDs of the workflow, the list and the web are the same as in the source site. This sometimes confuses Nintex, in this case regarding conditional start. The conditional start adds an event receiver to the list, and the workflow itself is synchronous, so when saving a form it takes a couple of seconds to close because the form waits for the workflow to finish. Even if the workflow is small, this will always take longer than the user expects. So we changed the start condition to always run on change, but used the condition action as the first action in the workflow: the workflow always starts (asynchronously), but ends right away if the condition is not met. So we buy performance at the cost of more historic Nintex data.

So back to the task. The publishing of a workflow can be done with NWAdmin, which was my obvious choice to team up with PowerShell to run through the sites of my web application and publish the workflow. Publishing the workflow alone does not help though, as the GUID stays the same. We need to decouple the workflow from its history. This can be done by publishing it with a new name (per Nintex Support).

The NWAdmin Tool however does not provide a method to delete a workflow. I then looked into the dreaded “using the ie-process as com.application” but the page where you can manage a workflow is really irritating from a DOM-perspective. Also the url click event triggers a javascript method with a confirm-window.

function DeleteWorkflow(sListId, sWorkflowId, sWorkflowType, bPublished) {
    if (bPublished) {
        if (!confirm(MainScript_DeleteWfConfirm))
            return;
    }
    else if ((!bPublished) && typeof (bPublished) != "undefined") {
        if (!confirm(MainScript_DeleteUnpublishedWfConfirm))
            return;
    }
    else {
        // orphaned workflows
        if (!confirm(MainScript_DeleteOrphanedWfConfirm))
            return;
    }
    ShowProgressDiv(MainScript_DeletingWfProgress);
    deletedWorkflowID = sWorkflowId;
    var oParameterNames = new Array("listId", "workflowId", "workflowType");
    if (sListId == "") {
        sListId = "{00000000-0000-0000-0000-000000000000}";
    }
    var oParameterValues = new Array(sListId, sWorkflowId, sWorkflowType);
    var callBack = function () {
        if (objHttp.readyState == 4) {
            if (CheckServerResponseIsOk()) {
                //delete the table row's for this workflow
                var tableRows = document.getElementsByTagName("TR");
                for (var i = tableRows.length - 1; i > -1; i--) {
                    if (tableRows[i].getAttribute("WfId") == deletedWorkflowID) {
                        tableRows[i].parentNode.removeChild(tableRows[i]);
                    }
                }
                SetProgressDivComplete(MainScript_WfDeleteComplete);
            }
        }
    }
    InvokeWebServiceWithCallback(sSLWorkflowWSPath, sSLWorkflowWSNamespace, "DeleteWorkflow", oParameterNames, oParameterValues, callBack);
}

As you can see there is an if-clause which shows a confirm-window in any case, so I could not use this method. But thankfully I found the last line
InvokeWebServiceWithCallback(sSLWorkflowWSPath, sSLWorkflowWSNamespace, "DeleteWorkflow", oParameterNames, oParameterValues, callBack);

That took me on the right track.

I looked into the method, but that was the less efficient way of approaching the problem. The link to the webservice would have gotten me further (/_vti_bin/NintexWorkflow/Workflow.asmx?op=DeleteWorkflow).


function InvokeWebServiceWithCallback(sServiceUrl, sServiceNamespace, sMethodName, oParameters, oParameterValues, fCallBack) {
    if (objHttp == null)
        objHttp = createXMLHttp();

    oTargetDiv = null; // prevents the onstatechange code from doing anything


    // Create the SOAP Envelope
    var strEnvelope = "<soap:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "<soap:Body>" +
                    "<" + sMethodName + " xmlns=\"" + sServiceNamespace + "\">" +
                    "</" + sMethodName + ">" +
                "</soap:Body>" +
               "</soap:Envelope>";

    var objXmlDoc = CreateXmlDoc(strEnvelope);

    // add the parameters
    if (oParameters != null && oParameterValues != null) {
        for (var i = 0; i < oParameters.length; i++) {
            var node = objXmlDoc.createNode(1, oParameters[i], sServiceNamespace);
            node.text = oParameterValues[i];
            objXmlDoc.selectSingleNode("/soap:Envelope/soap:Body/" + sMethodName).appendChild(node);
        }
    }

    var objXmlDocXml = null;
    if (typeof (objXmlDoc.xml) != "undefined")
        objXmlDocXml = objXmlDoc.xml; // IE
    else
        objXmlDocXml = (new XMLSerializer()).serializeToString(objXmlDoc); // Firefox, mozilla, opera

    objHttp.open("POST", sServiceUrl, true);
    objHttp.onreadystatechange = fCallBack;
    objHttp.setRequestHeader("Content-Type", "text/xml; charset=utf-8");
    objHttp.setRequestHeader("Content-Length", objXmlDocXml.length);
    if (sServiceNamespace.charAt(sServiceNamespace.length - 1) == "/")
        objHttp.setRequestHeader("SOAPAction", sServiceNamespace + sMethodName);
    else
        objHttp.setRequestHeader("SOAPAction", sServiceNamespace + "/" + sMethodName);
    objHttp.send(objXmlDocXml);
}

In any case I developed the script to run the delete workflow method via soap and that’s what I want to share with you below.

The script deletes exactly one workflow on a list in a given web based on the id. The ID of the Workflow can be retrieved from the nintex configuration database.

SELECT workflowid, workflowname
  FROM [Nintex_Config].[dbo].[PublishedWorkflows]
  where workflowname = '[Workflow A]'
  group by workflowid, workflowname

For those of you who panic when seeing/ reading SQL, you can also get the ID from the page (the link) itself, but that kind of defeats the purpose of automating the task of deletion, because you would need to go to every management page to get all ids…but I guess anybody still reading this is not panicking yet…

btw the export-workflows nwadmin command does not give you the ids of the workflows…

but if you want to get the ids in a different way you can use the following powershell:

$w = get-spweb "[WebUrl]";
$l = $w.lists["[ListTitle]"];
$l.WorkflowAssociations | select baseid, id, name
$w.Dispose();

The ID you want to use is the baseid.

Back to the SOAP Script…

I am sending the request with the default credentials…this may be something you will want to check. Check out the System.Net.NetworkCredential type, if you want to add a dedicated user to run the call with. Don’t forget the security implications… 😉
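A minimal sketch of such a swap (the account name is a placeholder; it replaces the DefaultCredentials assignment in the script below):

# Minimal sketch, assuming a dedicated account; prompts for the password instead of hard-coding it.
$cred = Get-Credential "MYDOMAIN\svc_nintexadmin"
$req.Credentials = $cred.GetNetworkCredential();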

The issues I had were that I forgot the XML header, that I started with a different content type and, the really big one, that I forgot to set the SOAPAction in the header. That’s the critical point. If you don’t do that you will get a 200 HTTP response code, but nothing will happen. After a couple of hours I was satisfied with my result. Here it is…

param (
    [string] $WebUrl = "[MyUrl]",
    [string] $ListTitle = "[MyListTitle]",
    [string] $WorkflowId = "[GUID of Workflow without parentheses]"
)


asnp microsoft.sharepoint.powershell -ea 0;

$spweb = get-spweb "$Weburl";
$splist = $spweb.lists | ? { $_.Title -eq "$ListTitle" -or $_.RootFolder.Name -eq "$ListTitle" }
$splistid = $splist.id.toString("B");

$WebServiceBase = $WebUrl;
$WebServiceMethod = "_vti_bin/NintexWorkflow/Workflow.asmx";
$Method = "POST";
$ContentType = "text/xml; charset=utf-8";

$soapEnvelope = "<?xml version=`"1.0`" encoding=`"utf-8`"?>" +
                "<soap:Envelope xmlns:xsi=`"http://www.w3.org/2001/XMLSchema-instance`" xmlns:xsd=`"http://www.w3.org/2001/XMLSchema`" xmlns:soap=`"http://schemas.xmlsoap.org/soap/envelope/`">" +
                    "<soap:Body><DeleteWorkflow xmlns=`"http://nintex.com`">" +
                        "<listId>" + $splistid + "</listId>" +
                        "<workflowId>{" + $workflowid + "}</workflowId>" +
                        "<workflowType>List</workflowType>" +
                    "</DeleteWorkflow>" +
                "</soap:Body>" +
                "</soap:Envelope>";

$req = [system.Net.HttpWebRequest]::Create("$WebServiceBase/$WebServiceMethod");
$req.Method = $method;
$req.ContentType = "text/xml; charset=utf-8";
$req.MaximumAutomaticRedirections = 4;
#$req.PreAuthenticate = $true;

$req.Credentials = [System.Net.CredentialCache]::DefaultCredentials;

$req.Headers.Add("SOAPAction", "http://nintex.com/DeleteWorkflow");
$encoding = new-object System.Text.UTF8Encoding
$byte1 = $encoding.GetBytes($soapEnvelope);

$req.ContentLength = $byte1.length;
$byte1.Length;
$newStream = $req.GetRequestStream();

$newStream.Write($byte1, 0, $byte1.Length);

$res = $null;
$res = $req.getresponse();
$stat = $res.statuscode;
$desc = $res.statusdescription;
        
$stat
$desc
$res

Recreate Office Web Apps // Proxy

Long time, no blog. Lots to do, and worth blogging about, but I just cannot find the time. Hopefully after March I will.

Recently at a customer I had to recreate the office web apps farm. As I have never done that before I tried naively:
Install the certificate, set the correct URLs on the server and recreate the SPWopiBindings.

Well there was a Proxy in my way, and the URL I wanted to use (spofficewebapps.customer.tld) was not in the list of exceptions.

So it didn’t work (adding the spwopibinding).


Office Web Apps Server:

$dns = "spofficewebapps.customer.tld"
set-location Cert:\LocalMachine\My
$cert = gci | ? { $_.DnsNameList.Unicode -eq $dns } | select -First 1;
$cert.FriendlyName = $dns
Set-OfficeWebAppsFarm -InternalURL "https://$dns" -ExternalUrl "https://$dns" -CertificateName "$dns"

SharePoint Server:

Remove-SPWopiBinding -All:$true -Confirm:$false
New-SPWopiBinding -ServerName "spofficewebapps.customer.tld"

What I got was an error saying the server was not available. But my certificate was there, I could reach https://spofficewebapps.customer.tld/hosting/discovery/ just fine, and so none of the results from Google fit my bill.

What now? Well here is the list of remedies:
– Add the new URL to the list of Proxy exceptions
– Do not use Set-OfficeWebAppsFarm, but rather destroy and create (see below)
– Restart all servers involved

Then another thing: My servers aren’t getting the Proxy exceptions pushed. So I had to add them to Internet Explorer manually.

Good Code on Office Web Apps:

Remove-OfficeWebAppsMachine
$dns = "spofficewebapps.customer.tld"
New-OfficeWebAppsFarm -InternalUrl "https://$dns" -CertificateName "$dns" -EditingEnabled -LogLocation "D:\OWA-LOGS" -RenderingLocalCacheLocation "D:\OWA-CACHE"

So after all that I was finally able to add the office web apps back. By the way a host file entry on the SharePoint Server to the Office web apps Server DID NOT HELP.

Static IP? No thanks, I’ve got FTP!

So yes, there is a bit of a logical issue in the title. If I have FTP, I already have a static IP of course, which is connected to the server name; but maybe I don’t want that static IP there, I want one for a different purpose, and it costs me 15 EUR/month to get it via my Internet provider. I could use a service that tunnels my requests via a static IP to my dynamic one, but then I would have to register with somebody.

I thought, why can I not do the following? Trigger a timer job on my home machine, get the IP Address and store it in a file. This file I could either push via a service like dropbox (but I don’t want dropbox on my server) or I can use ftp.

I took the code from this site.

Here it is:


function UploadFTP {
    param(
        [string] $user,
        [string] $url,
        [string] $port,
        [string] $pass,
        [string] $localPath,
        [string] $remotePath
    )

    # create the FtpWebRequest and configure it
    $ftp = [System.Net.FtpWebRequest]::Create("ftp://" + $url + ":" + $port + "/" + $remotePath);
    $ftp = [System.Net.FtpWebRequest]$ftp
    $ftp.Method = [System.Net.WebRequestMethods+Ftp]::UploadFile
    $ftp.Credentials = new-object System.Net.NetworkCredential($user,$pass);
    $ftp.UseBinary = $true
    $ftp.UsePassive = $true
    # read in the file to upload as a byte array
    $content = [System.IO.File]::ReadAllBytes($localPath);
    $ftp.ContentLength = $content.Length
    # get the request stream, and write the bytes into it
    $rs = $ftp.GetRequestStream()
    $rs.Write($content, 0, $content.Length)
    # be sure to clean up after ourselves
    $rs.Close()
    $rs.Dispose()
}

function DownloadFTP {
    param(
        [string] $user,
        [string] $url,
        [string] $port,
        [string] $pass,
        [string] $downloadPath,
        [string] $remotePath
    )

    # Create a FTPWebRequest
    $FTPRequest = [System.Net.FtpWebRequest]::Create("ftp://" + $url + ":" + $port + "/" + $remotePath);
    $FTPRequest.Credentials = New-Object System.Net.NetworkCredential($user,$pass)
    $FTPRequest.Method = [System.Net.WebRequestMethods+Ftp]::DownloadFile
    $FTPRequest.UseBinary = $true
    $FTPRequest.KeepAlive = $false

    # Send the ftp request
    $FTPResponse = $FTPRequest.GetResponse()
    # Get a download stream from the server response
    $ResponseStream = $FTPResponse.GetResponseStream()
    # Create the target file on the local system and the download buffer
    $LocalFile = New-Object IO.FileStream ($downloadPath,[IO.FileMode]::Create)
    [byte[]]$ReadBuffer = New-Object byte[] 1024
    # Loop through the download
    do {
        $ReadLength = $ResponseStream.Read($ReadBuffer,0,1024)
        $LocalFile.Write($ReadBuffer,0,$ReadLength)
    }
    while ($ReadLength -ne 0)

    $LocalFile.Close();
    $LocalFile.Dispose();
}

$user = "someusername"
$url = "some.ftp.server"
$port = "21";
$pass = "somepassword";
$localPath = "C:\tmp\myfile.txt";
$downloadPath = "C:\tmp\myfiledown.txt";
$remotePath = "myuploadedfile.txt";

$ip = Get-NetIPAddress | ? { $_.AddressFamily -eq "IPv4" -and $_.InterfaceAlias -eq "Ethernet"}
$ip.IPv4Address > $localPath;

UploadFTP $user $url $port $pass $localPath $remotePath
DownloadFTP $user $url $port $pass $downloadPath $remotePath

So what I am doing is defining my variables, writing my IP to my localpath and uploading that file as well as downloading it. So my PoC was with one machine. The expectation is that the downloaded file and the original file are the same. Which is true.

The eventual setup will look a bit different because I will have to get at the public ip as well as setup the job which will then upload the file. On the other side I will need the part of the script, that downloads the file.
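For that setup, two small pieces are still missing: getting the public IP instead of the adapter address, and scheduling the upload. A minimal sketch, assuming a public "what is my IP" service such as api.ipify.org (any similar service works) and an assumed script path:

# Get the public (WAN) address via an external service instead of the local adapter address.
$publicIp = Invoke-RestMethod -Uri "https://api.ipify.org"
$publicIp > "C:\tmp\myfile.txt"

# Register a task that re-publishes the address every 30 minutes (run from an elevated prompt).
schtasks /Create /TN "PublishDynamicIP" /TR "powershell.exe -NoProfile -File C:\Scripts\Upload-MyIp.ps1" /SC MINUTE /MO 30 /RU SYSTEM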

So my use case is I want to connect to a server connected to the internet, but I don’t know the IP, because it is dynamic/ DHCP.

AppManagement and SubscriptionSettings Services, Multiple Web Applications and SSL

So currently I am setting up four environments, of which one is production, two are staging and another was a playground installation.

My staging environments (TEST, QA, PROD) are multi-server, multi-farm systems (multi-farm because the 2013 farm publishes Search and UPA to existing 2010 farms).
They are running SPS2013 Standard with March PU 2013 + June CU 2013. They will be using app pool isolation, and the App Management and Subscription Settings services have their own account (svc_sp{t, q, p}_app, i.e. svc_spt_app, svc_spq_app and svc_spp_app).

I have three web applications of which all are secured by SSL making a wildcard certificate necessary for the app domain. Each has their own account (svc_sp{t, q, p}_{col, tws, upa}). The reason for this is that I will be using Kerberos Authentication and for the SPNs I need dedicated accounts for each Application URL.

My playground was once a 4-server farm, but 3 servers have since been removed. It runs neither the March PU 2013 nor the June CU 2013. There, app pool isolation without SSL is used.

On the playground the app management worked well. I actually encountered my problem on my test first and tried to replicate on the playground, but couldn’t. But I am getting ahead of myself. The system was setup via autospinstaller and the necessary certificates and IPs involved were requested and implemented. The AD Team did the domain setup for me. I didn’t setup my environment following this article, but it is a good one to read. I also got the idea of creating a separate dummy web application for attaching my IIS Bindings and Certificate from it, which makes a lot of sense, because of security considerations and kerberos.

The first article to read to get an overview of what is necessary and what’s trying to be achieved can be found here.

So I set up everything and still it wasn’t working. What does that mean? I will explain. When I subscribe to an app, download it from the store and add it in a site collection of my choosing, I get to click on it once it is finished installing. The link then leads me to my app domain. With SSL I could only actually get anywhere when I was using the same application pools; otherwise I saw the below.

This is what I wanted to see:
Expected

This is what I saw on any of the web applications with SSL and that had a different app pool account than the one I was using for my dummy web application.
Actual

So this blank page is actually exactly what you see when you leave the request management service running on the frontends without doing any topology configuration.

So I tried to work with the user policy from the web application management page in hopes of giving the users permissions on the content databases. As I found out later, that was not actually happening, but it was exactly what was needed: I had to manually add the account of the app pool for the app domain to the SPDataAccess group of the content databases. Then it also works with SSL. I actually set up three web applications WITHOUT SSL on the test staging environment with the same users as the SSL web applications and this worked like a charm, but for any SSL web application I needed to explicitly give permissions to the content database. This is a nightmare to maintain. For my migration of 20 databases from 2010 to 2013 I need to do this again and again, and for each new content database I will create in the future. Just imagine you create a new content database and forget to do this. Now for any site collection in this content database the above issue will show up. Hard to debug later on.
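I granted the permissions manually; an alternative that can be scripted is GrantAccessToProcessIdentity, which grants a given identity access to all content databases of a web application. A minimal sketch (URL and account name are assumptions):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# Grants the app domain's app pool account access to every content database of this web application.
$webApp = Get-SPWebApplication "https://portal.mycompany.com"
$webApp.GrantAccessToProcessIdentity("MYDOMAIN\svc_spp_app")

Either way, remember to repeat this for every content database you attach or create later.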

Not sure what Microsoft is thinking here, but I am happy that it only took me 4 days to figure this one out.

Automation of Web Application Creation in Managed Environments (Part VII: Edit registry and host file)

This is the seventh and last article of this series. It is related to another article I already posted on the subject. This article tries to automate what was achieved in that article, i.e. edit the registry entry for BackConnectionHostNames to contain all hosted Hostnames as well as update the Local Host-File. Why would you do that? Because if you don’t then you will have a problem with calls from the server calling resources on the same server running into the loopback issue.

In essence the loopback issue is caused by a (valid!) security measure: the server should not call resources on itself, as this can mean a security loophole. Web servers, and especially web servers hosting web services, are a special case, so you need to configure for this case, and that is what you do by registering the allowed host names in the host file and the registry.

So let’s get into the BackConnectionHostNames Registry Edit.

$path = "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\BackConnectionHostNames"
$root_path = "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0"
$name = "BackConnectionHostNames"

$value = @"
some.hostname.extension
someother.hostname.extension
athird.hostname.extension
"@

if (Test-Path $path) {  
    Write ($path + " exists!")  
} else {  
    Write ($path + " needs to be created!") 
    New-ItemProperty -Path $root_path -Name $name -PropertyType MultiString -Value $value
} 

So this one is pretty easy. You can improve it by getting the multi-string content from a text file or something silly like that. So I have 3 variables at the top and these were the early days, you can see that because I omitted the semi-colon at the end of the line, which I usually do these days (except for function calls). Also the two last variables could be used to create the first, so it’s redundant, but that’s not even going to change the semantics of the script. The variable ‘value’ basically contains lines where each line represents a hostname.

Next the script checks if the path to the item property exists. It makes sure not to overwrite it, when a value exists. You could easily change this by adding the following line

Set-ItemProperty -Path $root_path -Name $name -Value $value -confirm:$false

after

Write ($path + " exists!")

The only time you will get a problem is when there already is a property with this name and this property does not have the type of a multi-valued string.

If the property does not exist it gets created with the path, name, type and value given. It’s almost as if you create a directory with PowerShell which makes sense because you can also walk through the registry the same way you do a directory of files and folders.

Now that we have set up the registry we can choose to hit “Windows + R” for “Run” and type ‘regedit’ then you navigate through the registry editor to the right place…hklm\system\CurrentControlSet\Control\Lsa\MSV1_0\ and then check the backconnectionhostnames value. It should be fine with the values given above.
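If you prefer to stay in the console, you can also read the value back without regedit (same key as above):

# Quick verification from PowerShell instead of regedit.
(Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" -Name "BackConnectionHostNames").BackConnectionHostNames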

Okay, the next step is getting the host file in order. You can do this manually. Honestly this is basically nothing but an ip address and a hostname separated by a tab or two at the most. Find the script below.

$inpath = ([string] (Split-Path -parent $MyInvocation.MyCommand.Definition) + "\hosts.txt")
$definitions = ([string] (Split-Path -parent $MyInvocation.MyCommand.Definition) + "\hosts_definitions.txt")

$outpath = ([string] (Split-Path -parent $MyInvocation.MyCommand.Definition) + "\hosts")

$content = get-content $inpath;

$content > $outpath 

import-csv $definitions -delimiter ';' | % {
    Write ("    " + $_.IP + "    " + $_.HostName) >> $outpath;
    Write ("    " + $_.IP + "    " + $_.HostName);
}
    
if (Test-Path $outpath) {
    copy-item $outpath "C:\windows\system32\drivers\etc\hosts" -confirm:$false -force
    Write ("Copied host-file");
    remove-item $outpath
}

The hosts.txt and the hosts_definitions.txt need to be in the directory the script is run in ($MyInvocation.MyCommand.Definition). The hosts file (without the .txt) will be created in the same directory and later moved to the correct directory. For each of the lines in the input file the script adds the IP and the hostname with 4 spaces in between. Bam! Finito! Your host file is finished and ready to be sent to the correct folder. The correct folder in this case is the drivers\etc folder of the system directory. You copy it there and delete the source. There you go. That’s it.

This is what the input file looks like:

IP;HostName
192.168.1.3;some.hostname.extension
192.168.1.4;someother.hostname.extension
192.168.1.5;athird.hostname.extension

and just for completeness, here is what the host file looks like when delivered with a fresh windows installation:

# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#       38.25.63.10     x.acme.com              # x client host

# localhost name resolution is handled within DNS itself.
#	127.0.0.1       localhost
#	::1             localhost

So now you have your registry and host files set up to handle the loopback check in the right way! The only thing is that you need to make this re-runnable in some fashion, or at least be able to add new host names in the future, as can be done with the scripts in all the other parts. So using the Set-ItemProperty cmdlet might be a really good idea.
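A minimal re-runnable sketch (same key as above; passing the host names as an array keeps the REG_MULTI_SZ entries separate):

$root_path = "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0"
$name = "BackConnectionHostNames"
$hostnames = @("some.hostname.extension", "someother.hostname.extension", "athird.hostname.extension")

# Update the value if it exists, otherwise create it.
if (Get-ItemProperty -Path $root_path -Name $name -ErrorAction SilentlyContinue) {
    Set-ItemProperty -Path $root_path -Name $name -Value $hostnames -Confirm:$false
} else {
    New-ItemProperty -Path $root_path -Name $name -PropertyType MultiString -Value $hostnames
}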

Additionally I want to touch on the topic of disallowing SSL 2.0 for IIS on Windows Server and automating that as well. This topic is described in another very detailed knowledge base article, so I will only briefly give you an overview of why you would want to do this. It fits well with editing the registry for the backconnectionhostname entry, because that uses the same mechanics but isn’t quite as long/ complex.

Basically this belongs into the context of security and server hardening. You will want to disallow the use of certain certificates that IIS may allow but you do not. At one of my customers a security department basically made this a requirement for our servers. So doing this on every server (because any SharePoint Prerequisite Install configures the WebServer role) of the farm is quite tedious.

The problem with these ciphers (cryptos) is that they have names containing slashes (‘/’), e.g. RC2 128/128. The slash is used as a path separator in the context of PowerShell, so all the blogs tell you: you need to use the C# API. Okay, so in my first version of the script I actually just added the first part (“RC2 128”) and manually edited the entries later (adding “/128”). Yuck! So today I finally made this work in a sensible fashion. Finally I as the administrator no longer need to care about which cryptos are actually disabled; I can just run the script and be done with it. As it’s as fast as scripting should be, it’s a couple of script executions and less than a minute for a farm. Nice!

So how did I go about this? I found the RegistryKey class on MSDN, but I didn’t know how to get an object of the type [Microsoft.Win32.RegistryKey], because when do you actually have a handle or an IntPtr (basically what I was trying to get in the first place, so trying to find a handle is the same problem as before)? I had to scan through multiple Google results before finding something about the RegistryKey object and OpenSubKey. That’s where I found the constructor, which is basically a special form of the path (prefixed “Registry::”), and that was the last missing piece to script the below:

function Get-LocalMachineRegistryKeyForEdit([string] $registryPath, [string] $SubKey) {
    $fullParentPath = "Registry::HKLM\" + $registryPath;
    if ( Test-Path $fullParentPath ) {
        $reg = [Microsoft.Win32.Registry]::LocalMachine.OpenSubKey($registryPath);
        $reg.OpenSubKey($SubKey, $true);
    }
}

function New-SubKey([Microsoft.Win32.RegistryKey] $key, [string] $subkey) {
    $fullParentPath = "Registry::" + $key.Name;
    if ( Test-Path $fullParentPath ) {
        $fullChildPath = $fullParentPath + "\" + $subkey;
        if ( -not (Test-Path $fullChildPath) ) {
            $newsubkey = $key.CreateSubKey($subkey);
        }  
    }
}

function New-PathDWORD([Microsoft.Win32.RegistryKey] $key, [string] $subkey, [string] $valueType, [string] $valueName, [string] $value) {
    $fullParentPath = "Registry::" + $key.Name;
    if ( Test-Path $fullParentPath ) {
        $fullChildPath = $fullParentPath + "\" + $subkey;
        if ( -not (Test-Path $fullChildPath) ) {
            $newsubkey = $key.CreateSubKey($subkey);
        }
        $k = Get-Item -Path ($fullChildPath)  

        if($k -ne $null) {
            $kProp = Get-ItemProperty -Path $fullChildPath -Name $valueName -erroraction silentlycontinue
            if($kProp -eq $null) {
                New-ItemProperty -Path $fullChildPath -Name $valueName -PropertyType $valueType -Value $value
            }
        }   
    }
}

# Registry::HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL
$schannelPath = "SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL";

# Registry::HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers
$ciphers = "Ciphers";

$cip = Get-LocalMachineRegistryKeyForEdit $schannelPath $ciphers

#enabled == 0
Write "SCHANNEL SECTION - Enabled=0";

New-PathDWORD $cip "NULL" "DWORD" "Enabled" "0"

New-PathDWORD $cip "RC2 40/128" "DWORD" "Enabled" "0"
New-PathDWORD $cip "RC2 56/128" "DWORD" "Enabled" "0"
New-PathDWORD $cip "RC4 40/128" "DWORD" "Enabled" "0"
New-PathDWORD $cip "RC4 56/128" "DWORD" "Enabled" "0"
New-PathDWORD $cip "RC4 64/128" "DWORD" "Enabled" "0"
New-PathDWORD $cip "RC2 128/128" "DWORD" "Enabled" "0"
New-PathDWORD $cip "RC4 128/128" "DWORD" "Enabled" "0"

#enabled == 1
Write "SCHANNEL SECTION - Enabled=1";

New-PathDWORD $cip "DES 56/56" "DWORD" "Enabled" "0xffffffff";
New-PathDWORD $cip "Triple DES 168/168" "DWORD" "Enabled" "0xffffffff"; # Triple DES 168/168

# ------
# Hashes

$hashes = "Hashes";
$hash = Get-LocalMachineRegistryKeyForEdit $schannelPath $hashes

Write "HASHES SECTION - Enabled=0";

#enabled == 0
New-PathDWORD $hash "MD5" "DWORD" "Enabled" "0"; # MD5

Write "HASHES SECTION - Enabled=1";

#enabled == 1
New-PathDWORD $hash "SHA" "DWORD" "Enabled" "0xffffffff"; # SHA


Write "ALGO SECTION";

$keyexchalgo = "KeyExchangeAlgorithms";
$algo = Get-LocalMachineRegistryKeyForEdit $schannelPath $keyexchalgo

New-PathDWORD $algo "PKCS" "DWORD" "Enabled" "0xffffffff"; # PKCS


Write "PROTOCOLS SECTION";

$protocolsPath = "SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols";
$protocols = "Protocols";

$prot = Get-LocalMachineRegistryKeyForEdit $schannelPath $protocols

$pct10 = "PCT 1.0";
New-SubKey $prot $pct10;  #PCT 1.0

$ssl20 = "SSL 2.0";
New-SubKey $prot $ssl20;  #SSL 2.0

$pct = Get-LocalMachineRegistryKeyForEdit $protocolsPath $pct10
New-PathDWORD $pct "Server" "DWORD" "Enabled" "0"; 

$ssl = Get-LocalMachineRegistryKeyForEdit $protocolsPath $ssl20
New-PathDWORD $ssl "Server" "DWORD" "Enabled" "0";

When all of it has run, the SCHANNEL section of the registry contains the corresponding keys and DWORD values.

We basically have three methods that I will need to implement for the different use-cases I have.
I have the “give me a registry key for editing” use-case. So I get me that .NET object if it exists. Watch out that I am doing this in the HKLM (HKey Local Machine) context. If you want to do something outside of HKLM you need to change it accordingly.

So then I have my object that I want to create DWORDs and keys for. Well, I need a method that does exactly that, because the first 15 or so steps are “get me the key for editing, check if a key exists and add a DWORD enabling or disabling it”. Then I have a special case starting with the protocols section. There are actually sub-subkeys (or grandchildren keys if you will). So we also need to be able to just create a path and then perform our Path+DWORD operation again.

So the Path+DWORD is the heart of our script. Let’s look at it then. We make sure we don’t try to create something that already exists. So we have a parallel identification variable for the Test-Path method and the RegistryKey.OpenSubKey method. The subsequent OpenSubKey then opens the key for editing. I could of course also have implemented it so that it directly opens the key I want in the first call of OpenSubKey and uses the flag for editing there. Might have been more slick, but how do I find my scripts on other blogs later if I don’t have any silliness in them, right? 😉

So that’s a wrap.

I am sure you can optimize these things or maybe find easier solutions to these challenges I drew up here, but I think it was well worth writing this down as it took quite some time to find the solutions and the way I work I get inspiration for other problems I come across from reading these types of blogs.

I hope reading this article or all of them was worth your time and I would appreciate any feedback from your side if you are reading this.
I enjoyed writing this, so hopefully reading it was fun, too. My SharePoint journey isn’t over by far, so stay tuned for the next articles to come.

Back To Part VI
Back To Overview


Automation of Web Application Creation in Managed Environments (Part VI: Edit bindings for IPs and Certificates on the IIS Site)

For the sixth part of this series we have a special one. This might be the least stable one and you may need to change something to make it work. It is also one of these things when you finally get it done you click F5 and then it’s like “wow, sweet!”.

param (
    [string] $sitename,
    [string] $url,
    [string] $ip
)

import-module webadministration;

Function GetCertificate {
    param (
        [string] $url
    )

    $url = $url.ToUpper();
    $url = $url.Replace("DOMAIN.SOMEEXTENSION", "domain.someextension");

    $cert = Get-ChildItem cert:\localmachine\my | ? { $_.Subject.Contains("CN=$url")}

    $cert;
}

Function BindCertificate {
    param (
        [System.Security.Cryptography.X509Certificates.X509Certificate] $cert,
        [string] $ip
    )
    $certObj = Get-Item $cert.PSPath; 

    $item = get-Item IIS:SslBindings\${ip}!443 -ErrorAction SilentlyContinue;

    if($item -eq $null) {
        New-Item IIS:SslBindings\${ip}!443 -value $certObj;
    } else {
        Set-Item IIS:SslBindings\${ip}!443 -value $certObj;
    }
}

Function ReplaceWebsiteBinding {
    Param(
        [string] $sitename,
        [string] $oldBinding,
        [string] $newValue
    )

    $wsbindings = (Get-ItemProperty -Path "IIS:\Sites\$sitename" -Name Bindings);

    for($i=0;$i -lt ($wsbindings.Collection).length;$i++) {
        if((($wsbindings.Collection[$i]).bindingInformation).Contains($oldBinding)){
            ($wsbindings.Collection[$i]).bindingInformation = $newValue;
        }
    }
    Set-ItemProperty -Path "IIS:\Sites\$sitename" -Name Bindings -Value $wsbindings
}

$cert = GetCertificate $url;

if($cert -ne $null) {
    BindCertificate $cert $ip;
}

ReplaceWebsiteBinding $sitename "*:443:" "${ip}:443:";

So let’s check this script out. We have a sitename, we have an IP and we have the URL of the website. We need the IP of course because we want to set that in the bindings, and if you use https you need it anyway (that is, if you don’t use a wildcard certificate). The script does three things: it retrieves the certificate based on the URL (the URL needs to be in the subject of the certificate), it binds the certificate to the IP, and once you have the IP and the certificate bound, it replaces the website binding (which is not yet associated with the sitename at that point).

So you need to get the site via the sitename which is the path as well in the IIS:\ directory. It expects that you do not have a “real” binding, i.e. the way SharePoint creates it in Part II, plain and simple with “all unassigned:443”. If that’s not the case for you for any reason you need to edit the second of the three parameters. The third parameter represents what you have set in the second step: the correct binding.

So all the function does is iterate over all bindings of the site and edit the one you want to replace (the one given as second parameter). The last step is just replacing the old with the new. That’s it. IIS is set up. Off to the next step…
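A hypothetical invocation of the script above (the file name, site name and values are assumptions; the site name must match the IIS site SharePoint created):

# Hypothetical call; adjust the script file name and parameters to your environment.
.\Set-WebsiteBindings.ps1 -sitename "Portal" -url "portal.meiringer.com" -ip "192.168.1.10"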

Continue Reading…Part VII

Back To Part V
Back To Overview


Automation of Web Application Creation in Managed Environments (Part V: Import certificates into cert store)

Before I begin I want to mention that this is one of the things that I have tried to automate at least 4 times in the last 2 years and I just failed again and again before I finally succeeded a couple of days ago. So this is currently my favorite script (yes, I have a favorite script, I am a nerd I know).

The reason why I had such a hard time was that there are a couple of methods described out there in the blogosphere on how to do this and none of them worked for me. Looking at what I was doing and thinking I was doing it wrong rather than expecting that there is a different way or additional effort of doing it took me quite some time to debug/ troubleshoot. On the other hand, if you don’t find anything via google these days, does it really exist? It obviously does (the other explanation would be that I suck at googling, but then I get lots of feedback that I am actually pretty good or at least above average, so that’s not it…)!

As I have been using Windows Server 2012 more heavily in the last couple of weeks, I looked a bit more into what Windows Server brings to the table and investigated the whole process of requesting and approving certificates, and how Windows Server handles this when acting as requester and approver.

So with this new knowledge I finally got it working with the certutil tool and it’s a pretty short script in the end as well.

But before I get to the script that actually does all the magic, why do I want to automate this in the first place?
If you ever counted the number of clicks you have to perform to manually add a certificate to the store you won’t ask…

  • open an mmc.msc via 'Windows + R' Shortcut or typing it in the Start>Search bar.
  • Add the certificates snap-in via File>Add/Remove>Select ‘Certificates’>Click Add>Select ‘Computer Account’>Next>Finish>OK
  • open personal>certificates in the tree and right click>All tasks>Import…>Next>Select File via Browse>Next>Next>Finish>OK

Counting every action here you are at 20 before you have your certificate where you want it and that’s without requesting it and provisioning it on your server. If you do this with 4-8 certificates on 6 servers you understand my pain. I am not lazy, I just hate doing something more than once. Imagine my frustration with these types of numbers.

So now you know why it makes sense to automate this. Why have it as it’s own step? Well the prerequisite of course is that the certificates are available from Part III and as this has nothing to do with SharePoint nor IIS it makes sense to separate it and also allow bulk import functionality.

The request and approve process at the customer I work with is separated between departments, so we use certreq.exe to request the certificate as described in Part III of this series.

In general, if you want to troubleshoot something efficiently you will want to reduce the dependencies on other persons and departments, ideally to zero. So what I did was use my development machine and set up a Root Certificate Authority on a Windows Server 2008.

Then I requested certificates via UI with a local DC/ CA. This was after I found the article on how to import certificates using powershell. The code I reused is displayed below (for your convenience without the irritating question mark symbols).

function Import-509Certificate {
    param([String]$certPath,[String]$certRootStore,[String]$certStore)

    $pfx = new-object System.Security.Cryptography.X509Certificates.X509Certificate2;
    $pfx.import($certPath);
    
    $store = new-object System.Security.Cryptography.X509Certificates.X509Store($certStore, $certRootStore) 
    $store.open("MaxAllowed");
    $store.add($pfx);
    $store.close();
}

So now I can call the function I have with something like

Import-509Certificate -certPath "C:\temp\mycert.cer" -certRootStore LocalMachine -certStore My

This will get me my certificate “mycert” into the personal cert store of the localmachine, which is what I want, because only those certificates can later be selected by IIS for the bindings of my IIS Sites in Part VI of this series.

So now that I did this Import-509Certificate this is what my certificate looks like in the store.

No PrivKey 1

and the properties look like this:

No PrivKey 2

So actually it should look more like this; please concentrate on the highlighted sections that show the private keys from the machine are available:

PrivKey 1

and the properties should look like this:

PrivKey 2

So with the upper two images I pretty much have nothing. This is where I started overthinking it the first 3-4 times and failed. I didn’t have a password so the pfx script above didn’t help me any and the different constructors and flags for the X509Certificate2 object didn’t help me either. I did this in a couple of variations, tried the API in hopes this was a special case I was using that was not working. No dice.

So fast forward to a couple of days back.

The solution is a lot easier and I came across my building blocks when I found this article.

Certutil or the Certificate Utility sounded to me very similar to the certreq or Certficate Request Tool I was using successfully to create my cert requests. So why not investigate more.

I checked the parameters for the repairstore operation which sounded pretty good to me: Repair key association or update certificate properties or key security descriptor

I googled for the command and I came up with exactly what I wanted: link.

This is the resulting script that works for me.

param(
    [string] $certPath
)

#certutil -addstore My
#certutil -repairstore My 

CertUtil -addstore My $certPath

set-location "cert:\LocalMachine\My";

Get-ChildItem | ? { $_.hasprivatekey -eq $false } | % { certutil -repairstore my $_.Thumbprint }

The input is the full path of the file (the certificate itself – also known as the approved request or the response from the CA).

The certificate utility or certutil is an exe that can be found in the system directory of Windows ([system]:\windows\system32\), so it is available in whatever PowerShell you choose. The operation addstore allows you to add a certificate to the store you define, which in my case is the personal store of the local machine. This is also known as ‘my’.

In the next step I can set the location which is something I found in a blog I cannot remember. So similar to your registry or IIS when you import the webadministration module you can use your cert store as a directory (which makes absolute sense by the way).

So I set the location to localmachine\my which is the same location I just added my certificate at.

Now comes the brainy part of the script. I get all the child elements, i.e. all the certificates from that store that do not have a private key, and I use certutil again with the repairstore parameter, identifying the certificate via its thumbprint. Awesome! I don’t even need any more parameters, because if there are any other certificates in this store that do not have a private key… hey, just keep doing what you’re doing to all of them… can’t hurt at all.

So once this has run, you will achieve happiness, because your certificate will also appear with the little key-symbol in your cert store. If that’s not the case F5 (refresh) will do the trick.

This is what powershell looks like by the way…

Powershell Output

So that was a lot of text to explain a very short script. The other articles should have a better ratio in this regard. 😉

So…

Continue Reading…Part VI

Back To Part IV
Back To Overview


Automation of Web Application Creation in Managed Environments (Part II: Creation of WebAppliction via AutoSPInstaller (adapted))

To kick off the series, the first step of the creation is the SharePoint part. All other parts are pretty much concerned with everything else (i.e. registry, host file, IIS, network adapters, certificates). Here we want to create a web application in SharePoint. Why all the other parts are interesting will be explained in each of those articles, but let’s briefly touch on what we get with this article, what is missing, what we still need to do and why.

The attached script takes an XML input, walks through the configuration file and creates a web application, configures it, adds managed paths, creates the underlying database, and can even create site collections if so specified. At this point a credit is in order to the great guys of the AutoSPInstaller project on CodePlex. I basically took their code and tailored it (maybe even worsened it a little 😉 ). I definitely made one change in line 265, because I usually delete the default site in my IIS, since an IIS process that is not used bothers me for some reason. If you are worried about the performance you can also just stop the site and stop the app pool. That works well, too! If you do that you can use the original script line, which picks the path where the virtual folder will be created based on the path of the default site.

So now that we covered where the script comes from (autospinstaller), let’s check out what is missing. Basically if you are running http on a custom port (anything above 1023 is a good choice if your servers are dedicated (please let them be dedicated!)) and you don’t even care about availability (dev and demo machines) you usually use a one server farm and thus you don’t necessarily need host names.

You can just use the server name. That would pan out to look like http://servername:1024 or http://localhost[:80]. Now if you are using host names you need to at least add these to your DNS Catalogue. This is usually not something a SharePoint Expert does and it’s basically so damn easy even a sales-guy can do it (as long as we are not talking reverse-lookup) 😉 This is actually a private joke between me and another Tech-Guy, so don’t worry if it’s not funny to you.

So if you add your host names to the DNS or have them added for you for that matter you should also consider updating the local host file as well as the backconnectionhostnames registry key.
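If the DNS team hands you the keys, adding the record is a one-liner as well; a minimal sketch assuming the DnsServer module on a Windows Server 2012+ DNS server (zone, host and IP are placeholders):

# Minimal sketch; creates an A record for the web application host name.
Add-DnsServerResourceRecordA -ZoneName "meiringer.com" -Name "portal" -IPv4Address "192.168.1.10"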

For additional security you should definitely consider using https. If you are in an organization you are working with company data and documents, so hey keep them safe! Then you need certificates to identify your servers. Then you are packing a lot of problemos, hombre!

You might want to consider wildcard certificates. But if the org you work for is anything like my customers you can check that and just go with one IP per Frontend and webapplication of the farm. So this is basically where it gets interesting. If you get here and you still have the choice please also configure kerberos for added value. You can even have a pseudo single sign-on experience for Mac Users via Machine or User Certificates. SAML Claims (so yes, claims authentication) would be the greatest thing, but how many organizations are ready for that? In any case make sure you are allowing only the right kind of SSL Certificates on your server. Your IT Department may be happy if you disable the ugly ones before they have to ask.

If you are setting up SP2013 fresh, then please just do it right the first time. You can save yourself so much hassle in the end. It’s a lot more interesting to work on the bleeding edge rather than the old: “we did a fast version, now we saw it’s not as great and need to change it”. Typical example is the MySite Memberships Page on the MyProfile. Once the links are added it’s hard to impossible to change or remove them. Confusing and frustrating. Oh well, let’s not get too far away from the purpose of this post.

Basically what I wanted you to understand is that it makes sense to discuss these topics because they are usually not discussed by developers, but every infra-engineer should know them by heart at a certain point in time.

This article of the series focusses on the configuration file I attached below but will also copy in here:

<?xml version="1.0" ?>
<Configuration>
  <Farm>
    <Database>
        <DBPrefix>SP2013</DBPrefix>
    </Database>
    <ObjectCacheAccounts>
        <SuperUser>meiringer\spcacher</SuperUser>
        <SuperReader>meiringer\spcachew</SuperReader>
    </ObjectCacheAccounts>
  </Farm>  
  <SharePoint Version="14" />
  <WebApplications AddURLsToHOSTS="false">
        <WebApplication type="Portal"
                        name="Portal"
                        applicationPool="Portal"
                        applicationPoolAccount="meiringer\spapp"
                        url="http://portal.meiringer.com"
                        port="80"
                        UseHostHeader="true"
                        AddURLToLocalIntranetZone="true"
                        databaseName="C_Portal_001"
                        useClaims="true"
                        useBasicAuthentication="false"
                        useOnlineWebPartCatalog="false">
            <!-- You can now specify a different DB server/instance or alias per web application and service application. The behavior is slightly different than with the farm DB server though, see below. -->
            <Database>
                <!-- If you are creating an alias (recommended!), <DBServer> is actually the value of the SQL alias; otherwise it's the NetBIOS name of the SQL server or instance. 
                     If you leave <DBServer> blank, the default DBServer value for the farm is used -->
                <DBServer>sharepointdb</DBServer>
                <!-- The script can create a SQL alias for you. Enter the DBInstance, and if you leave <DBPort> blank, script will assume default port value of 1433 -->
                <DBAlias Create="false"
                         DBInstance="DONT CARE!"
                         DBPort="" />
            </Database>
            <ManagedPaths>
                <ManagedPath relativeUrl="hlp" explicit="true" />
                <ManagedPath relativeUrl="lb" explicit="true" />
                <ManagedPath relativeUrl="ws" explicit="false" />
            </ManagedPaths>
            <SiteCollections>
                <SiteCollection siteUrl="http://portal.meiringer.com/lb"
                                HostNamedSiteCollection="false"
                                Owner="meiringer\spapp"
                                Name="Loadbalancing"
                                Description="Loadbalancing Test"
                                SearchUrl=""
                                CustomTemplate="false"
                                Template="STS#1"
                                LCID="1033"
                                Locale="en-us"
                                Time24="false">
                </SiteCollection>
            </SiteCollections>
        </WebApplication>
    </WebApplications>
</Configuration>

Basically this is the web applications section of the autospinstaller xml configuration file. The reason why I wanted to separate it from the farm creation is because then I can create web applications after the farm is created as well. A couple of configurations are necessary, so I had to add them, like the SharePoint Version (which differs from the original, because it was called Install SPVersion=’2013′) but I didn’t want the added overhead of checking the setup.exe in the install path, so I changed that to the version 14/15 instead of the year, because I don’t need the year anyway. The version is necessary to find the language packs that are installed that you can specify in the site collection section as locale.

Next to that I am also planning to add more to the script so I can add the domain users group or authenticated users group to the loadbalancing site, or at least add the loadbalancing account automatically to the site collection users.

The ManagedPaths part is interesting and one of the two reasons (the other being the site collections you can also include) why this scripted approach makes a lot of sense. Of course the procedure can be performed unattended, which is already much nicer, but the great part is that you can store this file and start from it the next time you need to do this, without having to keep all of your (naming) conventions in mind again. The dialog for creating a web application in Central Administration really drives me crazy, because depending on what you select in some fields, other fields change their values as well (e.g. switching SSL on/off adds the port to the host header field), so you need to re-check every field before submitting. With a configuration file you have everything at one glance: way better!
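
For reference, this is roughly what the three ManagedPath entries from the sample translate to in plain SharePoint PowerShell; a minimal sketch, assuming the web application URL from the configuration above:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$webAppUrl = "http://portal.meiringer.com"

# Explicit inclusions host exactly one site collection at that URL
New-SPManagedPath -RelativeURL "hlp" -WebApplication $webAppUrl -Explicit
New-SPManagedPath -RelativeURL "lb" -WebApplication $webAppUrl -Explicit

# Wildcard inclusion: site collections can be created below this path
New-SPManagedPath -RelativeURL "ws" -WebApplication $webAppUrl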

To be honest, whoever wants to deviate from these settings should make sure to have AutoSPInstallerFunctions.ps1 ready, for instance for the SQL alias creation. I didn’t test this script exhaustively.

The object cache users are mandatory in this script, just like the DBPrefix value and pretty much all of the other fields. You should also make sure that the managed accounts are already registered with SharePoint. Registering them from the script might be a future improvement, because with app pool isolation it is actually quite common to have a new account for each new web application.
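
A minimal sketch of that check, assuming the application pool account from the sample configuration; the credential prompt is only needed when the account still has to be registered as a managed account:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$appPoolAccount = "meiringer\spapp"

# Register the account as a managed account if SharePoint does not know it yet
if (-not (Get-SPManagedAccount -Identity $appPoolAccount -ErrorAction SilentlyContinue)) {
    $credential = Get-Credential -UserName $appPoolAccount -Message "Password for the new managed account"
    New-SPManagedAccount -Credential $credential
}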

So at the end of this we have ourselves a web application that is (or isn’t) connected to the online web part catalog, has the cache users registered, has a sensible database name and already comes with managed paths and site collections. Now we can focus on everything non-SharePoint (quite weird for a SharePoint blog if you think about it).

Continue Reading…Part III

Back To Overview


Automation of Web Application Creation in Managed Environments (Part III: Request Certificates)

This article is based on one of my oldest scripts. It requests a certificate via the Windows tool certreq.exe, which is located in the system directory.

Why would you need this script in the first place? You could of course type each of the inputs on the command line, but that gets boring really quickly, because apart from the hostname the subject is always the same, and so are all of the other settings.

In an IT organization of a certain size you will not be the owner of the certificate authority; you might even use an external provider for this service. So you create a certificate request, somebody on the other end approves it, and you get a certificate back that you add to the certificate store of your server, so that communication between this server and its clients can be secured with this certificate.
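
For reference, with an internal Windows CA the round trip for a request file produced by the script below could look roughly like this; a minimal sketch where the CA configuration string and the file names are placeholders (the file names follow the SERVERNAME_FQDN pattern used by the script):

# Submit the request to the issuing CA and save the issued certificate
certreq.exe -submit -config "CASERVER\My-Issuing-CA" SERVER01_APPS.MYCOMPANY.COM.req SERVER01_APPS.MYCOMPANY.COM.cer

# Install the issued certificate so it gets matched to the pending request / private key in the machine store
certreq.exe -accept -machine SERVER01_APPS.MYCOMPANY.COM.cer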

This is the calling script. It doesn’t do much more than locate the input file and the actual worker script and then run that script once for each line of the file. Don’t worry about the string splitting: it is a relic from the time when IPs were included in the certificate as SAN (Subject Alternative Name), which I no longer do, because it exposes machine information and thus gives an attacker information he should not have.

# Directory this script lives in; the input file and the worker script are expected right next to it
$path = ([string] (Split-Path -parent $MyInvocation.MyCommand.Definition))

$txtpath = ($path + "\" + "Definition.txt")
$ps1path = ($path + "\" + "Get-Certificate.ps1")

Write ("txt path: " + $txtpath)
Write ("ps1 path: " + $ps1path)

if( Test-Path($txtpath) ) {
    if( Test-Path($ps1path) ) {
        # Create one certificate request per line of the input file
        Get-Content $txtpath | Foreach-Object { 
            # The split on ";" is the relic mentioned above; only the hostname part is used
            $splitted = $_.Split(";")
            $subdomain = $splitted[0]
            
            $expression = "powershell.exe $ps1path $subdomain"
            
            invoke-expression $expression;
        }
    } else {
        Write "Error finding resource file:"
        Write ("ps1 file (" + $ps1path + ") should be in the same directory as this script")
    }
} else {
    Write "Error finding resource files:"
    Write "txt and ps1 files with name of server should be in the same directory as this script"
}
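
If you only need a request for a single host, you can of course call the worker script directly as well, for example:

powershell.exe .\Get-Certificate.ps1 somehost.somedomain.extension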

The more interesting of the two scripts is the core or callee script, which brings everything together. It takes the input parameter and combines it with all of the static information to create an input file, which is in turn passed to certreq.exe, which then creates a certificate request. So you might say it is a certificate request request. 😉

An interesting part of the script is the naming of the certificate file. It gets the server name via the DNS class and appends the FQDN to it. The certificate will get .cer as an extension, the certreq input file ends in .req.txt and the request itself in .req. Files that are no longer needed are cleaned up during the process, and before a new request is created the script deletes any leftover files from a previous run. You can also see that the key length is 2048 bits, so that’s hopefully reasonably secure. The input file has a specific format, which is why I write each line separately. You can also see in the commented-out lines that at one point I forgot to wrap the subject in quotation marks before writing it to the file. That was not such a good idea: it created certificates, but they did not work as expected.

$path = ([string] (Split-Path -parent $MyInvocation.MyCommand.Definition))

if ($args -eq $null -or $args.Length -lt 1) {
    Write "Usage: "
    Write "------ "
    
    Write "Param 1 - fqdn"
} else {
    Write ("argument 1: " + $args[0])
    
    $server = [System.Net.Dns]::GetHostName()
    $FQDN = $args[0].ToUpper()
    
    $name = ($server + "_" + $FQDN);
    
    $subfolder = (Get-Date).ToString("yyyy-MM-dd");
    
    $outpath = ($path + "\" + $subfolder + "\");
         
    if(-not (test-path($outpath))) {
        $f = new-item -Path $outpath -ItemType "Directory";
    }

    $txtFile = ($outPath + $name + ".req.txt")
    $outFile = ($outPath + $name + ".req")
    
    $subject = ("CN=" + "$FQDN,E=contact@meiringer.com,OU=Global IT,O=meiringer AG,L=Wiesbaden,S=Hesse,C=DE")
    $san = "dns=$FQDN"
    
    Write ("name: " + $name)
       
    Write ("outPath: " + $outPath);
    Write ("outFile: " + $outFile);
    Write ("txtFile: " + $txtFile);

    # Clean up leftovers from a previous run for the same host
    if(Test-Path($txtFile))
    {
        Remove-Item $txtFile -confirm:$false
    }
    
    if(Test-Path($outFile))
    {
        Remove-Item $outFile -confirm:$false
    }
    
    Write ("subject: " + $subject)
    Write ("san: " + $san)
    
    Write "[Version]" > $txtFile
    Write ("txtFile = `"`$Windows NT`$`"") >> $txtFile
    Write "" >> $txtFile
    Write "[NewRequest]" >> $txtFile
    Write ("Subject = `"" + $subject + "`"")  >> $txtFile
#    Write ("Subject = " + $subject)  >> $filePath
    Write "KeySpec = 1" >> $txtFile
    Write "KeyLength = 2048" >> $txtFile	  	
    Write "KeyUsage = 0x30" >> $txtFile 	  	
    Write "RequestType = CMC" >> $txtFile
    Write "ProviderName = `"Microsoft RSA SChannel Cryptographic Provider`"" >> $txtFile
    Write "Providertype = 12" >> $txtFile
    Write "SMIME = FALSE" >> $txtFile
    Write "SILENT = TRUE" >> $txtFile
    Write "MACHINEKEYSET = TRUE" >> $txtFile
    Write "" >> $txtFile
    Write "[RequestAttributes]" >> $txtFile
#    Write ("SAN = `"" + $san + "`"") >> $filePath
    Write ("SAN = " + $san) >> $txtFile
    
    # Create the certificate request in the machine context; the resulting .req file is what goes to the CA
    $execPath = "C:\Windows\System32\certreq.exe"
    Invoke-Expression -command "$execPath -New -machine $txtFile $outFile"
    
    if(Test-Path($txtFile))
    {
        Remove-Item $txtFile -confirm:$false
    }
}

The last part invokes the certificate request tool with the text file and the output file as parameters. The script will open a few windows while it runs, but it is quite quick, so that is not too irritating. I should also mention that you can add multiple Subject Alternative Names if you like; all you need to do is join the additional entries in the SAN value (Microsoft’s INF examples join them with an ampersand, e.g. dns=name1&dns=name2) and you are good to go. If you want to do that, piece the file name together explicitly to keep a single identifier available, or add another parameter for the name. The subject is basically only needed for identification; the SAN part is what will later be used for the web sites and needs to match the host names.
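
As a sketch, the relevant line in the script above could then look like this; the second host name is purely hypothetical, and the ampersand-joined form is the one used in Microsoft’s examples for the SAN request attribute:

$extraFQDN = "SECONDHOST.SOMEDOMAIN.EXTENSION"            # hypothetical additional host name
$san = ("dns=" + $FQDN + "&dns=" + $extraFQDN)            # several SAN entries in one attribute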

For completeness’ sake, this is what the input file “Definition.txt” looks like. Each line represents a hostname and thus results in one certificate request.

somehost.somedomain.extension
someotherhost.somedomain.extension
athirdhost.somedomain.extension

Continue Reading…Part IV

Back To Part II
Back To Overview

Attachments: