SPSocialFeedManager.GetFeed: Exception: System.UriFormatException: Invalid URI: The hostname could not be parsed.

Today I fixed the second part of one of the craziest SharePoint 2013 issues of my year.

This issue only exists in builds prior to the January 2017 CU of SharePoint 2013. Since we are rapidly approaching 2019, I hope most of your farms are already past this update level.

However, I have one customer that is still on a 2016 patch level: 15.0.4815.1000 (April 2016 CU).

As usual, we are in the process of updating, and QA and UAT are already patched. However, I was able to get my hands on a machine with the same patch level as production and could reproduce the issue.

The issue is extremely rare, and to my shame I didn't find it myself; a colleague did. Take note of the fixed-issues list of KB3141477 (January 2017 CU for SharePoint Server 2013):

When you create a new newsfeed post that includes an app URL with an incorrect protocol (http:// instead of https://), the newsfeed page is broken and doesn't display news. The update ensures that a problem in a single news item does not block the newsfeed page from rendering.

What this means, as an example: Somebody installs a SharePoint-hosted app (add-in) in your environment. Somebody then copies the address of the app (https://app-[GUID].[appdomain]) or any link starting with it into your SharePoint 2013 microfeed, but removes the "s" from the SSL protocol before pasting. This works in the other direction as well, of course (you have an app host on http:// and the user adds an "s" after "http").

The immediate symptom is that the user receives a message that this didn't work as expected ("we've hit a snag").

The user clicks "OK" and thinks he/she is done with it. Far from the truth, however: refreshing the page shows a corrupted microfeed with a cryptic message (which is not so cryptic if you know what's up, but more on that later):

Something went wrong
SharePoint returned the following error: Invalid URI: The hostname could not be parsed. Contact your system administrator for help in resolving this problem.

So now what?

The ULS logs will tell you a bit more, of course, but you may not make the connection that the offending item has to be deleted from the microfeed list:

Request for app scheme does not match the webapp's scheme for this zone. Request Uri: http://app-a5ced8ff740661.apps-...
Request for app scheme does not match the webapp's scheme for this zone. Request Uri: http://app-a5ced8ff740661.apps-...
SPSocialFeedManager.GetFeed: Exception: System.UriFormatException: Invalid URI: The hostname could not be parsed.     at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind)...

So you see this and you don't know what it means. As mentioned above, it has to do with the fact that somebody added a link with the wrong protocol for the app domain to the microfeed. If you go to the microfeed list, you will see that only the farm administrator account (holding the "Social Data" permission in the User Profile Service) can actually delete items there. You can use this script to delete them via PowerShell (if you have server access):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ea 0;

$site = "";      # URL of the affected site
$itemIds = @();  # IDs of the broken microfeed items

$spweb = Get-SPWeb $site;
$microfeed = $spweb.Lists["MicroFeed"];
foreach( $itemId in $itemIds ) {
    if( $itemId -ne $null ) {
        Write-Host ("ItemId: " + $itemId);
        $item = $microfeed.GetItemById($itemId);
        if( $item -ne $null ) {
            $item.Delete();
        }
    }
}
$spweb.Dispose();

Once you delete this item, you will find the feed on the site itself is fine (after a refresh!). However, if in the meantime you have been on the my-site host and checked your stream while following the site where the URL was posted, you will find that feed broken as well. You might be lucky like me and it is not. But some users might still be affected. You could stop following that site, but that is hardly a solution for users productively working with SharePoint.

Why is the error still there even though the item has already been deleted? Caching. Deleting your browser cache and trying again doesn't help. Clearing the timer cache or doing an iisreset doesn't either. What actually helps is clearing the DISTRIBUTED CACHE (AppFabric).

You can do that via the following script, which targets specifically only the activity feed container:

Add-PSSnapin Microsoft.SharePoint.Powershell -ea 0;
Clear-SPDistributedCacheItem -ContainerType DistributedActivityFeedCache;

That will help. You can find more information on this command in the cmdlet documentation. What didn't help for me, and turned out to be a red herring, were Update-SPRepopulateMicroblogFeedCache and Update-SPRepopulateMicroblogLMTCache. By the way, the required parameters for Update-SPRepopulateMicroblogFeedCache changed between April 2016 CU and July 2018 CU.
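For reference, the two repopulation cmdlets looked roughly like this on that patch level. This is a sketch, not a fix for this issue; the proxy lookup and the account name are placeholders, and newer CUs add further parameter sets:

```
# user profile service application proxy of the farm
$proxy = Get-SPServiceApplicationProxy | ? { $_.TypeName -like "*User Profile*" }

# rebuild the last-modified-time cache
Update-SPRepopulateMicroblogLMTCache -ProfileServiceApplicationProxy $proxy

# rebuild the feed cache for a single user (placeholder account)
Update-SPRepopulateMicroblogFeedCache -ProfileServiceApplicationProxy $proxy -AccountName "DOMAIN\user"
```

They are the documented way to repopulate the feed cache after clearing it, which is why they look so tempting here.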

So in summary:

If you find you have this issue: don't waste your time. Check your farm version, delete the list item, flush the cache. Plan to update to a recent CU as soon as possible. I hope two professional days of my life turn out to be just minutes for you.


App-Only Authentication in SharePoint Provider Hosted Apps

In an article I wrote a few weeks ago, Renew Certificate in Provider Hosted Apps Scenario, I provided information on how to renew an expired certificate in the context of Provider Hosted Apps. What I missed during this activity was a simple flag when creating the trusted security token issuer.

New-SPTrustedSecurityTokenIssuer -Name "$issuerName" -RegisteredIssuerName "$regIssuerName" -Certificate $certificate -IsTrustBroker

The thought process behind this little change is the reason for this article. One and a half work days went into meetings, research and tests. The solution was found in a team effort, possible only because we concentrated on the problem for a substantial timespan without interruption.

The issue is difficult to find because outputting the SharePoint object via the command below

Get-SPTrustedSecurityTokenIssuer | ? { $_.Name -eq "$issuerName" }

gives no indication of whether or not the -IsTrustBroker flag has been set.

This article gave me a bit of insight into the New-SPTrustedSecurityTokenIssuer cmdlet.


Significance/additional info on the cmdlet parameters:

  1. issuerID: the GUID generated in the previous step.
  2. publicCertPath: the path where the .cer file is saved.
  3. web: your developer site URL.
  4. realm: should be the same as your farm ID.
  5. New-SPTrustedSecurityTokenIssuer: a tip - for the Name parameter it can be helpful to use a readable name, such as "High Trust App" or "Contoso S2S apps", instead of the issuer ID.
  6. -IsTrustBroker: this flag ensures that you can use the same certificate for other apps as well. If you don't include it, you might receive the error "The issuer of the token is not a trusted issuer". So there are two possible approaches, each with its own pros and cons: use the same certificate shared by multiple apps, or use a separate certificate for each app. Read additional details at Guidelines for using certificates in high-trust apps for SharePoint 2013.
  7. iisreset: to ensure the issuer becomes valid immediately; otherwise it takes 24 hours.
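Put together, the parameters above end up in a registration sequence roughly like the following. This is a sketch, not a finished script: the issuer GUID, certificate path and site URL are hypothetical placeholders, and Get-SPAuthenticationRealm supplies the realm/farm ID from point 4:

```
# hypothetical placeholder values
$issuerId = "11111111-1111-1111-1111-111111111111"              # your issuer GUID, lower-case
$publicCertPath = "C:\Certs\HighTrustApp.cer"                   # your .cer file
$web = Get-SPWeb "https://dev.mycompany.com"                    # your developer site

$realm = Get-SPAuthenticationRealm -ServiceContext $web.Site    # same as the farm ID
$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($publicCertPath)

# readable name plus [issuerID]@[realm], and do not forget -IsTrustBroker
New-SPTrustedSecurityTokenIssuer -Name "Contoso S2S apps" -Certificate $certificate -RegisteredIssuerName ($issuerId + "@" + $realm) -IsTrustBroker

iisreset
```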

The context and symptom of this issue:


SharePoint Provider Hosted Apps run on the PHA environment. They can create client contexts on the app webs on the SharePoint side.

This is possible by either using the existing user token


or creating one via an app-only token.

TokenHelper.GetClientContextWithAccessToken(_hostWeb.ToString(), appOnlyAccessToken)


Without the -IsTrustBroker flag, only the first of the two options works. For the app-only context, an error is thrown when trying to execute a query, e.g. retrieving items via a CAML query. This had been working until the certificates used for the PHA-SharePoint trust expired and had to be renewed.


at System.Net.HttpWebRequest.GetResponse()

at Microsoft.SharePoint.Client.SPWebRequestExecutor.Execute()

at Microsoft.SharePoint.Client.ClientRequest.ExecuteQueryToServer(ChunkStringBuilder sb)

Button_Click: The remote server returned an error: (401) Unauthorized.


Renew Certificate in Provider Hosted Apps Scenario

With a certain customer of mine I recently had an issue, where in the span of a month all of the certificates for the Provider Hosted Apps Domain (PHA) had to be renewed for four staging environments (including PROD).

I was lucky to be the successor of somebody who made the same mistake the service provider made a month later, so I was prepared and could save the day. In hopes of saving you the time it took me to force the service provider's hand (around 10 hours of telco time), I want to give you a brief overview of how to tackle this, the full list of reference articles, and a script to set the SharePoint part (the trust).

First a short introduction.

Certificates. Certificates are often used to encrypt data communication between machines. This is done to make sure that two parties can communicate without a third party listening, and to verify the identity of somebody initiating communication.

In the scenario of SharePoint and PHA we have two parties: the PHA server farm and the SharePoint server farm. Usually each farm consists of at least 2 servers for redundancy/high-availability reasons.

When HTTP communication is done via SSL, each website in IIS has a binding on port 443, which uses a certificate for encrypting the data the site responds with to requests.

Any admin can swap the certificate in IIS. All you need to do is check the existing certificate and request a new one (self-signed, internally trusted or externally trusted) with the correct SAN (Subject Alternative Name).

As an example, let's assume the following setup:
SharePoint has a wildcard certificate, like *.apps.mycompany.com. The PHA environment has a corresponding certificate for apps.mycompany.com. This may be the same certificate if you request the big kahuna, i.e. a multi-SAN wildcard certificate. Usually that is not the case, and it is not necessary.

The PHA IIS will have the apps.mycompany.com certificate, and SharePoint will have the wildcard certificate. But how does SharePoint make sure that the apps are not moved to a different server that runs different code and pretends to be the PHA server? There is a trust between these servers on the SharePoint side. In essence this article has one message: "Don't forget this trust!"

The underlying process of replacing the apps.mycompany.com certificate consists of four easy steps, all of which are necessary:

  1. Replace the apps.mycompany.com certificate in the IIS of each PHA server

    This is a no-brainer. Request the certificate, get the response, and use certmgr.msc to import the certificate into the Personal store of the machine account. Make sure the certificate has a private key. It can be self-signed, internally trusted or externally trusted (depending on your scenario, i.e. whether you externalize your farm or not).

  2. Export the apps.mycompany.com certificate as pfx (with private key)

    Export it with private key (and password) and put it into the location, where the web.config of each Provider Hosted App can access it. Usually this certificate is stored in a central location on each IIS PHA Server.

  3. Export the apps.mycompany.com certificate as cer (without private key)

    Export it without private key and put it into a location on a SharePoint server, where you can access it from the SharePoint Powershell script in the next step.

  4. Replace the SharePoint trust via script

    The certificate (cer) is referenced in two locations in SharePoint (SPTrustedRootAuthority, STSTrustedSecurityTokenIssuer). You can set it in the SPTrustedRootAuthority by updating the object and by deleting the STSTrustedSecurityTokenIssuer object and recreating this with the correct IssuerName and RegisteredIssuerName ([Issuer GUID]@[Realm]). See Script below.
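Steps 2 and 3 (the two exports) can be scripted as well. A minimal sketch, assuming the renewed certificate already sits in the machine's Personal store and the Windows PKI cmdlets (Export-PfxCertificate, Export-Certificate) are available; the paths and the password prompt are placeholders:

```
# find the renewed certificate by its subject (placeholder SAN)
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*apps.mycompany.com*" } |
    Sort-Object NotAfter -Descending | Select-Object -First 1

# step 2: pfx with private key, for the PHA servers
$pwd = Read-Host -AsSecureString "Pfx password"
Export-PfxCertificate -Cert $cert -FilePath "C:\Certs\apps.mycompany.com.pfx" -Password $pwd

# step 3: cer without private key, for the SharePoint script in step 4
Export-Certificate -Cert $cert -FilePath "C:\Certs\apps.mycompany.com.cer"
```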

EDIT: The originally posted version of this script was missing a crucial parameter: the New-SPTrustedSecurityTokenIssuer call must include the flag -IsTrustBroker, as seen below. I wrote a specific article on this topic (App-Only Authentication in SharePoint Provider Hosted Apps).

param (
    [string] $CertificateSubjectAlternativeName = "apps.mycompany.com"
    , [string] $CertificatePathLocation = "[MyDrive]:\[MyPath]\apps.mycompany.com.cer"
)

asnp microsoft.sharepoint.powershell -ea 0

$certificate = $null;
$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($CertificatePathLocation);

if( $certificate -ne $null ) {
    $tra = $null;
    $tra = Get-SPTrustedRootAuthority | ? { $_.Certificate.Subject.Contains(${CertificateSubjectAlternativeName}) }

    if( $tra -ne $null ) {
        # update the certificate on the existing root authority object
        $tra.Certificate = $certificate;
        $tra.Update();
    } else {
        Write-Host -ForegroundColor Red "Error: No Certificate with SAN '${CertificateSubjectAlternativeName}' found in Root Authority Store.";
    }

    $sci = $null;
    $sci = Get-SPTrustedSecurityTokenIssuer | ? { $_.SigningCertificate.Subject.Contains(${CertificateSubjectAlternativeName}) }

    if( $sci -ne $null ) {
        # delete and recreate the token issuer with the same names - do not forget -IsTrustBroker!
        $regIssuerName = $sci.RegisteredIssuerName;
        $issuerName = $sci.DisplayName;
        Remove-SPTrustedSecurityTokenIssuer -Identity $sci -Confirm:$false;
        New-SPTrustedSecurityTokenIssuer -Name "${issuerName}" -RegisteredIssuerName "${regIssuerName}" -Certificate $certificate -IsTrustBroker;
    } else {
        Write-Host -ForegroundColor Red "Error: No Certificate with SAN '${CertificateSubjectAlternativeName}' found in Trusted Security Token Issuer Store.";
    }
} else {
    Write-Host -ForegroundColor Red "Error: Certificate not found at location '${CertificatePathLocation}'.";
}
The last step was not mandatory in general, but we had to do it on the IIS servers of the PHA environment. The certificate gets cached in the user profile of the user running the app pool. Thus, once you replace the certificate, the app pool is no longer able to find the file. This is broadcast by an ugly error like: 'CryptographicException: The system cannot find the file specified.'

This is how to fix that: open IIS -> Application Pools -> DefaultAppPool -> right-click -> Advanced Settings -> Load User Profile | set this value to "true".

It seems a bit absurd to change this setting since it did not have to be set when configuring the PHA connection in the first place, but it does the trick.
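If you prefer to script that setting instead of clicking through IIS Manager, the same flag can be flipped with appcmd. The pool name below is a placeholder; use whichever pool your PHAs actually run under:

```
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /processModel.loadUserProfile:true
```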



Unexpected Response from Server when updating SharePoint ListItem via JSOM

These days I am working a lot on the client side of things. A couple of months ago I started writing my first lines of JavaScript/JSOM (the JavaScript client-side object model).

I wrote a small method to create list items in a list (listTitle) based on a collection of properties (properties) and their respective values. Here it is:

function createListItem(listTitle, properties) {
    var d = $.Deferred();
    try {
        var ctx = SP.ClientContext.get_current();
        var oList = ctx.get_web().get_lists().getByTitle(listTitle);

        var itemCreateInfo = new SP.ListItemCreationInformation();
        oListItem = oList.addItem(itemCreateInfo);

        for (var i = 0; i < properties.length; i++) {
            var prop = properties[i];
            oListItem.set_item(prop.Key, prop.Value);
        }
        oListItem.update();
        // when creating, the item must be loaded into the context (see below)
        ctx.load(oListItem);

        var o = { d: d, ListItem: oListItem, List: listTitle };
        ctx.executeQueryAsync(
            function () {
                o.d.resolve(o.ListItem);
            },
            function (sender, args) {
                o.d.reject("Could not create list item in list " + o.List + " - " + args.get_message());
            });
    } catch (Exception) {
        d.reject(Exception);
    }
    return d.promise();
}

function updateListItem(listTitle, id, properties) {
    var d = $.Deferred();
    try {
        var ctx = SP.ClientContext.get_current();
        var oList = ctx.get_web().get_lists().getByTitle(listTitle);

        oListItem = oList.getItemById(id);

        for (var i = 0; i < properties.length; i++) {
            var prop = properties[i];
            try {
                oListItem.set_item(prop.Key, prop.Value);
            } catch (Exception) {
                console.log(prop.Key + ' ' + prop.Value);
            }
        }
        oListItem.update();
        // no ctx.load(oListItem) here - loading after an update triggers
        // "unexpected response from the server" (see below)

        var o = { d: d, ListItem: oListItem, List: listTitle, p: properties };
        ctx.executeQueryAsync(
            function () {
                o.d.resolve(o.ListItem);
            },
            function (sender, args) {
                o.d.reject("Could not update list item in list " + o.List + " - " + args.get_message());
            });
    } catch (Exception) {
        d.reject(Exception);
    }
    return d.promise();
}

So this is what happened when this code was executed while editing another item in a different list…

When I debugged using Chrome (my browser of choice when writing JavaScript; interestingly, I never used it before that), I received the error "unexpected response from the server".

I figured out that there are two key lines in this code that can be the cause of this:

var ctx = SP.ClientContext.get_current();

ctx.load(oListItem);

In my case the first line was actually not responsible for the error message. For your reference: if you use var ctx = new SP.ClientContext(url); you may encounter this error message, so make sure to check that.

You should always use the current client context when using JSOM, similar to the best-practice guidelines for opening webs on the server side (SSOM, the Server-Side Object Model).

In my case the second line was the cause for the issue.

When creating an item, I need to load the item into the context afterwards (or the error will show up even though the item is created correctly).

When updating an item, the item must not be loaded into the context afterwards (or the error will show up even though the item is updated correctly).

It kind of makes sense: when creating an item, you are actually sending an SP.ListItemCreationInformation to the server, so the resulting list item is not yet known on the client. When updating an item, I already have my list item object. Why would I need to load all the other information afterwards?

So once I removed the line from the update method, the code no longer failed and the error message disappeared.

So for the experts among you this may be old news, but I actually needed to think about this for a few minutes before I figured it out, so I thought it was well worth blogging about. Especially since I haven’t blogged for quite some time.

Send A SOAP Message to Nintex Workflow WebService – DeleteWorkflow

Yesterday I was challenged to develop a script that deletes a list workflow on 105 sites and publish it with a new name.

There is a bug within Nintex where, when you copy a site collection, the GUIDs of the workflow, the list and the web are the same as in the source site. This sometimes confuses Nintex, in this case regarding conditional start. The conditional start adds an event receiver to the list, and the workflow itself is synchronous, so saving a form takes a couple of seconds to close because the form waits for the workflow to finish. Even if the workflow is small, this always takes longer than the user expects. So we changed the start condition to always run on change, but used the condition action as the first action in the workflow: the workflow always starts (asynchronously), but ends right away if the condition is not met. So we buy performance at the price of more historic Nintex data.

So back to the task. The publishing of a workflow can be done with NWAdmin, which was my obvious choice to team up with PowerShell to run through the sites of my web application and publish the workflow. However, only publishing the workflow does not help, as the GUID stays the same. We need to decouple the workflow from its history, which can be done by publishing it with a new name (per Nintex Support).

The NWAdmin tool, however, does not provide a method to delete a workflow. I then looked into the dreaded "using the IE process as COM application", but the page where you can manage a workflow is really irritating from a DOM perspective. Also, the URL click event triggers a JavaScript method with a confirm window:

function DeleteWorkflow(sListId, sWorkflowId, sWorkflowType, bPublished) {
    if (bPublished) {
        if (!confirm(MainScript_DeleteWfConfirm))
            return;
    }
    else if ((!bPublished) && typeof (bPublished) != "undefined") {
        if (!confirm(MainScript_DeleteUnpublishedWfConfirm))
            return;
    }
    else {
        // orphaned workflows
        if (!confirm(MainScript_DeleteOrphanedWfConfirm))
            return;
    }
    deletedWorkflowID = sWorkflowId;
    var oParameterNames = new Array("listId", "workflowId", "workflowType");
    if (sListId == "") {
        sListId = "{00000000-0000-0000-0000-000000000000}";
    }
    var oParameterValues = new Array(sListId, sWorkflowId, sWorkflowType);
    var callBack = function () {
        if (objHttp.readyState == 4) {
            if (CheckServerResponseIsOk()) {
                // delete the table rows for this workflow
                var tableRows = document.getElementsByTagName("TR");
                for (var i = tableRows.length - 1; i > -1; i--) {
                    if (tableRows[i].getAttribute("WfId") == deletedWorkflowID) {
                        tableRows[i].parentNode.removeChild(tableRows[i]);
                    }
                }
            }
        }
    };
    InvokeWebServiceWithCallback(sSLWorkflowWSPath, sSLWorkflowWSNamespace, "DeleteWorkflow", oParameterNames, oParameterValues, callBack);
}

As you can see, there is an if-clause which raises a confirm window in any case, so I could not use this method. But thankfully I found the last line

InvokeWebServiceWithCallback(sSLWorkflowWSPath, sSLWorkflowWSNamespace, "DeleteWorkflow", oParameterNames, oParameterValues, callBack);

That took me on the right track.

I looked into the method, but that was the less efficient way of approaching the problem. The link to the webservice would have gotten me further (/_vti_bin/NintexWorkflow/Workflow.asmx?op=DeleteWorkflow).


function InvokeWebServiceWithCallback(sServiceUrl, sServiceNamespace, sMethodName, oParameters, oParameterValues, fCallBack) {
    if (objHttp == null)
        objHttp = createXMLHttp();

    oTargetDiv = null; // prevents the onstatechange code from doing anything

    // Create the SOAP envelope
    var strEnvelope = "<soap:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "<soap:Body>" +
                    "<" + sMethodName + " xmlns=\"" + sServiceNamespace + "\">" +
                    "</" + sMethodName + ">" +
                "</soap:Body>" +
            "</soap:Envelope>";

    var objXmlDoc = CreateXmlDoc(strEnvelope);

    // add the parameters
    if (oParameters != null && oParameterValues != null) {
        for (var i = 0; i < oParameters.length; i++) {
            var node = objXmlDoc.createNode(1, oParameters[i], sServiceNamespace);
            node.text = oParameterValues[i];
            objXmlDoc.selectSingleNode("/soap:Envelope/soap:Body/" + sMethodName).appendChild(node);
        }
    }

    var objXmlDocXml = null;
    if (typeof (objXmlDoc.xml) != "undefined")
        objXmlDocXml = objXmlDoc.xml; // IE
    else
        objXmlDocXml = (new XMLSerializer()).serializeToString(objXmlDoc); // Firefox, Mozilla, Opera

    objHttp.open("POST", sServiceUrl, true);
    objHttp.onreadystatechange = fCallBack;
    objHttp.setRequestHeader("Content-Type", "text/xml; charset=utf-8");
    objHttp.setRequestHeader("Content-Length", objXmlDocXml.length);
    if (sServiceNamespace.charAt(sServiceNamespace.length - 1) == "/")
        objHttp.setRequestHeader("SOAPAction", sServiceNamespace + sMethodName);
    else
        objHttp.setRequestHeader("SOAPAction", sServiceNamespace + "/" + sMethodName);
    objHttp.send(objXmlDocXml);
}

In any case, I developed a script to run the DeleteWorkflow method via SOAP, and that's what I want to share with you below.

The script deletes exactly one workflow on a list in a given web, based on the workflow ID. The ID of the workflow can be retrieved from the Nintex configuration database:

SELECT workflowid, workflowname
  FROM [Nintex_Config].[dbo].[PublishedWorkflows]
  where workflowname = '[Workflow A]'
  group by workflowid, workflowname

For those of you who panic when seeing/reading SQL: you can also get the ID from the management page (the link) itself, but that kind of defeats the purpose of automating the deletion, because you would need to visit every management page to get all the IDs. But I guess anybody still reading this is not panicking yet…

By the way, the export-workflows NWAdmin command does not give you the IDs of the workflows.

If you want to get the IDs in a different way, you can use the following PowerShell:

$w = get-spweb "[WebUrl]";
$l = $w.lists["[ListTitle]"];
$l.WorkflowAssociations | select baseid, id, name

The ID you want to use is the baseid.

Back to the SOAP Script…

I am sending the request with the default credentials; this may be something you will want to check. Check out the System.Net.NetworkCredential type if you want to run the call with a dedicated user. Don't forget the security implications… 😉
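If you do want a dedicated account, the swap is a one-liner. User name, password and domain below are placeholders; in practice you would read them from a secure source rather than hard-coding them:

```
# instead of DefaultCredentials, pass an explicit account (placeholders!)
$req.Credentials = New-Object System.Net.NetworkCredential("svc-nintex", "P@ssw0rd", "MYDOMAIN");
```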

The issues I had were that I forgot the XML header, that I started with a different content type, and the really big one: I forgot to set the SOAPAction header. That's the critical point. If you don't set it, you will get a 200 HTTP response code, but nothing will happen. After a couple of hours I was satisfied with my result. Here it is…

param (
    [string] $WebUrl = "[MyUrl]",
    [string] $ListTitle = "[MyListTitle]",
    [string] $WorkflowId = "[GUID of Workflow without parentheses]"
)

asnp microsoft.sharepoint.powershell -ea 0;

$spweb = Get-SPWeb "$WebUrl";
$splist = $spweb.Lists | ? { $_.Title -eq "$ListTitle" -or $_.RootFolder.Name -eq "$ListTitle" };
$splistid = $splist.ID.ToString("B");

$WebServiceBase = $WebUrl;
$WebServiceMethod = "_vti_bin/NintexWorkflow/Workflow.asmx";
$Method = "POST";
$ContentType = "text/xml; charset=utf-8";

# SOAP envelope; note the XML declaration and the workflow id wrapped in braces
$soapEnvelope = '<?xml version="1.0" encoding="utf-8"?>' +
                '<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
                    '<soap:Body>' +
                        '<DeleteWorkflow xmlns="http://nintex.com">' +
                            '<listId>' + $splistid + '</listId>' +
                            '<workflowId>{' + $WorkflowId + '}</workflowId>' +
                            '<workflowType>List</workflowType>' +
                        '</DeleteWorkflow>' +
                    '</soap:Body>' +
                '</soap:Envelope>';

$req = [System.Net.HttpWebRequest]::Create("$WebServiceBase/$WebServiceMethod");
$req.Method = $Method;
$req.ContentType = $ContentType;
$req.MaximumAutomaticRedirections = 4;
#$req.PreAuthenticate = $true;

$req.Credentials = [System.Net.CredentialCache]::DefaultCredentials;

# the critical part: without the SOAPAction header you get a 200, but nothing happens
$req.Headers.Add("SOAPAction", "http://nintex.com/DeleteWorkflow");

$encoding = New-Object System.Text.UTF8Encoding;
$byte1 = $encoding.GetBytes($soapEnvelope);

$req.ContentLength = $byte1.Length;
$newStream = $req.GetRequestStream();
$newStream.Write($byte1, 0, $byte1.Length);
$newStream.Close();

$res = $req.GetResponse();
$stat = $res.StatusCode;
$desc = $res.StatusDescription;
Write-Host "Response: $stat $desc";
$res.Close();

Download Prerequisites

While I was doing an installation for the TAP with one of my customers with strong restrictions (no internet connectivity for my test server), I had to develop a script for downloading prerequisites. You will say: but why? There are scripts online that do that for you… Well, the ones I found were all hard-coded for SPS2014 prerequisites.

Of course I cannot disclose which prerequisites these are, but I can share the script and how to get the prerequisites in a text file.

So this is the code you need to download the prerequisites.

param (
    [string] $InputFilePath = "mytextfile.txt",
    [string] $OutputPath = "[some drive letter]:\somedirectory\"
)

$webclient = New-Object System.Net.WebClient;
$creds = Get-Credential -Credential ([Security.Principal.WindowsIdentity]::GetCurrent()).Name;
$webclient.Proxy.Credentials = $creds;

$urls = Get-Content $InputFilePath;

foreach( $url in $urls ) {
    # resolve redirects first, so the file name can be taken from the final uri
    $webRequest = [Net.WebRequest]::Create($url);
    $webResponse = $webRequest.GetResponse();
    $fileName = $webResponse.ResponseUri.Segments[$webResponse.ResponseUri.Segments.Length - 1];
    $webResponse.Close();
    $filePath = $OutputPath + $fileName;

    Write-Host ("Downloading " + $url + " to " + $filePath);
    $webclient.DownloadFile($url, $filePath);
}

The text file contains one download URL per line.

The text file can be generated by using strings.exe or Process Explorer on a machine that cannot host SharePoint.

I used Sysinternals' strings.exe. That worked very well.

Recreate Office Web Apps // Proxy

Long time, no blog. Lots to do, and worth blogging about, but I just cannot find the time. Hopefully after March I will.

Recently, at a customer, I had to recreate the Office Web Apps farm. As I had never done that before, I naively tried:
install the certificate, set the correct URLs on the server and recreate the SPWopiBindings.

Well, there was a proxy in my way, and the URL I wanted to use (spofficewebapps.customer.tld) was not in its list of exceptions.

So adding the SPWopiBinding didn't work.

Office Web Apps Server:

$dns = "spofficewebapps.customer.tld"
set-location Cert:\LocalMachine\My
$cert = gci | ? { $_.DnsNameList.Unicode -eq $dns } | select -First 1;
$cert.FriendlyName = $dns
Set-OfficeWebAppsFarm -InternalURL "https://$dns" -ExternalUrl "https://$dns" -CertificateName "$dns"

SharePoint Server:

Remove-SPWopiBinding -All:$true -Confirm:$false
New-SPWopiBinding -ServerName "spofficewebapps.customer.tld"

What I got was an error saying the server was not available. But my certificate was there, I could reach https://spofficewebapps.customer.tld/hosting/discovery/ just fine, and so none of the results from Google fit my bill.

What now? Well, here is the list of remedies:
– Add the new URL to the list of proxy exceptions
– Do not use Set-OfficeWebAppsFarm, but rather destroy and recreate the farm (see below)
– Restart all servers involved

Another thing: my servers weren't getting the proxy exceptions pushed, so I had to add them to Internet Explorer manually.

Good Code on Office Web Apps:

$dns = "spofficewebapps.customer.tld"
New-OfficeWebAppsFarm -InternalUrl "https://$dns" -CertificateName "$dns" -EditingEnabled -LogLocation "D:\OWA-LOGS" -RenderingLocalCacheLocation "D:\OWA-CACHE"
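For completeness, the "destroy" half of "destroy and recreate" comes before the New-OfficeWebAppsFarm above. A sketch, assuming a single-machine OWA farm (on a multi-machine farm, run it on every member):

```
# remove this machine from the existing Office Web Apps farm
# (removing the last machine dissolves the farm)
Remove-OfficeWebAppsMachine
```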

So after all that, I was finally able to add the Office Web Apps bindings back. By the way, a hosts file entry on the SharePoint server pointing to the Office Web Apps server DID NOT HELP.

Update Theme on all Webs of Office365

So I have been doing a small migration project for a customer moving to Office 365. Finally I have a reason to bother working on Office 365. I have to say upfront: this is not a finished platform (yet!). Great potential, though.

As usual the initial idea is something completely different than what I spent most of my time on up to now.

Care to venture a guess? …of course: design. As usual, theming is a b**** because you cannot deploy a solution on Office 365.

First I tried all the usual stuff: the UI, but there are about 30-40 sites (subsites). Then CSOM/web services (CSOM is using the API web services, if I am correct…).
Too bad the SPWeb.ApplyTheme method doesn't work as intended. Funny how Microsoft is: the method has four parameters, and if you give it 3 it will tell you "you gave me only 3, I need 4", even though TechNet tells you it's fine to pass $null values. No biggy, because you could use dummy data, right? But if you do that and pass 4 values, you get "Too many resources requested" or a similar message translated from German (the customer wants it in 1031; good that Office 365 has all the language packs).

So the result I created is a PowerShell script combining the chocolatey goodness of the SPO Management Shell (get all the SPO site objects of your tenant) with the caramely filling of CSOM (you cannot get any web objects via the management shell, so we use CSOM for that), and to top it off we use sweet old Internet Explorer as a COM application to fill out the form that applies the theme for each of the webs while iterating.

I would have liked to do it differently. In the traditional on-premises shell I could have used a one-line script to get this done. I could have guessed from the beginning that it would be a bit more difficult, but three different paradigms to get a simple thing like theming automation done is a bit hilarious. That doesn't compare to anything at all!

I would have been fine with combining the Office 365 Management Shell with CSOM. Okay, I mean the Management Shell includes only about 30 cmdlets at this point. It's pretty much only good for creating site collections, emptying the recycle bin (which, as I write this post, cannot be done via the UI – meeh) and adding users to SharePoint groups.

So we all know how bad this solution is – I am not even going to try to sell this to you as a good idea. It doesn’t work reliably. But it does reduce your workload. So that’s definitely sensible. I am imagining my customer telling me: “That theme is not good enough. Let’s use a different one.” This is probably what will happen. It usually does.

So it took me about the same time to complete the script as it would have taken to go through all of these webs and make all of the changes manually. So I have already won. Maybe I can help somebody else this way as well, which is why I am sharing the script here.

Keep in mind that you need to use the admin URL to connect to the SPO service. Check this article on how to set up your environment accordingly: Set up the SharePoint Online Management Shell Windows PowerShell environment.

You should be aware that if you want to do this on a SharePoint 2010 machine, you will have to open PowerShell or the ISE with the “-v2” switch (e.g. “%windir%\system32\WindowsPowerShell\v1.0\PowerShell_ISE.exe” -v2), because SharePoint 2010 will not work with .NET 4.0.

So the script needs the following:
– you need a palette; you can create one by taking an existing palette and editing it
– you need the management shell for Office 365 installed
– you need an Office 365 tenant and the credentials to access it (duuuh!)

The script does the following:
– iterate over all sites that are not the search site or the MySite host (those use oslo.master) or the public site
– get the root web from the site URL: this is where the palette (theme) is uploaded
– get all the webs and iterate over them
– for the next step you will need to have an IE window opened and authenticated against your tenant
– apply the theme via the two forms (you may think that because the second form has its own URL you can skip the first one, but the first one actually creates a cached version of the theme, so you need to fill out the first form to be able to submit the second one successfully)
– you may also ask: why the sleep commands? Any good script will work without them, but this is not one of those. We actually have to wait for the requests to be answered. It may even take a couple of seconds more than I allow in my script; the anchors I use for clicking will not be found by the getElementBy* methods if there isn’t enough time in between.

So here is the “masterpiece”. Drop me a comment if it does in fact help you. Apart from that, it may be a stepping stone for a much nicer script in the future.

param (
    [string] $LocalPalettePath = "C:\Backup\my-palette.spcolor",
    [string] $username = "myusername@mytenant",
    [string] $password = "mypassword",
    [string] $url = "mytenant-admin.url"
)

# Load the CSOM assemblies and System.Web (for HttpUtility); adjust the paths to your setup.
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
Add-Type -AssemblyName System.Web

function Process-File($Context, $File, $RemoteFolder) {
    Write-Output ("Uploading " + $File.FullName);
    $FileStream = New-Object IO.FileStream($File.FullName, [System.IO.FileMode]::Open)
    $FileCreationInfo = New-Object Microsoft.SharePoint.Client.FileCreationInformation
    $FileCreationInfo.Overwrite = $true
    $FileCreationInfo.ContentStream = $FileStream
    $FileCreationInfo.URL = $File.Name
    $Upload = $RemoteFolder.Files.Add($FileCreationInfo)
    $Context.Load($Upload)
    $Context.ExecuteQuery()
    $FileStream.Close()
}

function Upload-Palette($ctx, $web, [string] $localPalettePath) {

    #Write-Host "Web URL is" $web.Url
    if ($web.ServerRelativeUrl -ne "/") {
        $remoteFolderRelativeUrl = $web.ServerRelativeUrl + "/_catalogs/theme/15/";
    } else {
        $remoteFolderRelativeUrl = "/_catalogs/theme/15/";
    }
    $remoteFolder = $web.GetFolderByServerRelativeUrl($remoteFolderRelativeUrl);
    $ctx.Load($remoteFolder);
    $ctx.ExecuteQuery();

    $file = Get-Item $localPalettePath;
    Process-File $ctx $file $remoteFolder;
}

function CreateTheme($ie, $ctx, $web, [string] $localPalettePath, [string] $rootWebUrl, [string] $rootWebRelativeUrl) {

    $baseUrl = $web.Url.Replace($web.ServerRelativeUrl, "");
    $relativeWebUrl = $web.ServerRelativeUrl;

    $localPaletteItem = Get-Item $localPalettePath
    $localPaletteName = $localPaletteItem.Name;

    $url = $web.Url;

    if ($relativeWebUrl -eq "/") {
        $relativeBase = "";
    } else {
        $relativeBase = $relativeWebUrl;
    }
    if ($rootWebRelativeUrl -eq "/") {
        $relativeBaseRoot = "";
    } else {
        $relativeBaseRoot = $rootWebRelativeUrl;
    }

    $gallery = "/_layouts/15/start.aspx#/_layouts/15/designbuilder.aspx?masterUrl={0}&themeUrl={1}&imageUrl={2}&fontSchemeUrl={3}";

    $masterUrl = $relativeBase + "/_catalogs/masterpage/seattle.master";
    $themeUrl = $relativeBaseRoot + "/_catalogs/theme/15/$localPaletteName";
    $fontUrl = $relativeBaseRoot + "/_catalogs/theme/15/SharePointPersonality.spfont";
    $imageUrl = "";

    $masterUrlEncoded = [System.Web.HttpUtility]::UrlEncode($masterUrl);
    $themeUrlEncoded = [System.Web.HttpUtility]::UrlEncode($themeUrl);
    $fontUrlEncoded = [System.Web.HttpUtility]::UrlEncode($fontUrl);
    $imageUrlEncoded = [System.Web.HttpUtility]::UrlEncode($imageUrl);

    $formUrl = $url + [string]::Format($gallery, $masterUrlEncoded, $themeUrlEncoded, $imageUrlEncoded, $fontUrlEncoded);
    $ie.Navigate($formUrl); # open the design builder form in the authenticated IE window
    $ie.Visible = $true;
    sleep 5;

    "First Form:"

    $ieDoc = $ie.Document;

    $div = $ieDoc.getElementById("ms-designbuilder-main")

    $anchor = $div.GetElementsByTagName("a") | ? { $_.id -and $_.id.EndsWith("btnLivePreview") }
    $anchor.Click(); # trigger the live preview; this creates the cached version of the theme

    sleep 2;

    "Second Form:"

    $ieDoc = $ie.Document;

    sleep 15;
    $anchor = $ieDoc.getElementById("btnOk");
    $anchor.Click(); # confirm and apply the theme

    sleep 2;

    sleep 1;
}

# Control #

$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $(ConvertTo-SecureString $password -AsPlainText -Force)
$credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $(ConvertTo-SecureString $password -AsPlainText -Force));

Connect-SPOService -Url $url -Credential $cred

$sposites = Get-SPOSite | ? { -not $_.Url.EndsWith("search") -and -not $_.Url.Contains("-public.") -and -not $_.Url.Contains("-my.") }

foreach ($sposite in $sposites) {
    if ($sposite) {
        $ctx = New-Object Microsoft.SharePoint.Client.ClientContext($sposite.Url);
        $ctx.Credentials = $credentials;

        $rootWeb = $ctx.Web
        $childWebs = $rootWeb.Webs
        $ctx.Load($rootWeb);
        $ctx.Load($childWebs);
        $ctx.ExecuteQuery();

        Write-Host $rootWeb.Url;

        Upload-Palette $ctx $rootWeb $LocalPalettePath

        $app = New-Object -com shell.application
        $ie = $app.Windows() | ? { $_.Name -eq "Internet Explorer" } | select -First 1;

        CreateTheme $ie $ctx $rootWeb $LocalPalettePath $rootWeb.Url $rootWeb.ServerRelativeUrl;

        foreach ($childWeb in $childWebs) {
            CreateTheme $ie $ctx $childWeb $LocalPalettePath $rootWeb.Url $rootWeb.ServerRelativeUrl;
        }
    }
}

Copy List Fields, Views and Items From List to List

Today I had to recreate a SharePoint 2013 list because the old one had an error (content approval errored out with “Sorry, something went wrong” – a null pointer exception).

My first guess was to create a new list, and so I did, manually. Of course with a dummy name, so I had to recreate it again. I didn’t want to get stuck doing it a third time, so I created the little script below.

The script copies the list fields and adds them to the new list, then does the same with all the views, and then it copies all the items (which was the initial idea) to the new list.

The input is fairly simple. You need to specify a URL to identify the web you want to perform this operation on. You could amend the script to also accept a target URL, so that you can copy the fields, views and items across site and site collection boundaries. However, you might then run into issues with site fields used in your list that do not exist on the target site collection (Publishing Infrastructure, custom fields; you would need to do a bit more than just add a parameter and initialize another web object). Also, this works well for lists, but not for document libraries. Another limitation is content types; I did not include those either.

So you see, this is more of a starting point than anything else. But it does the job and it was pretty quick to write, so I thought I would share it with you.

param (
    [string] $Url,
    [string] $SourceList,
    [string] $TargetList
)

Add-PSSnapin microsoft.sharepoint.powershell -ea 0;

$spWeb = Get-SPWeb $Url;

$spListCollection = $spWeb.Lists;

$spSourceList = $spListCollection.TryGetList($SourceList);
$spTargetList = $spListCollection.TryGetList($TargetList);

if ($spSourceList) {
    if ($spTargetList) {
        $spTargetList.EnableModeration = $true;

        $spSourceFields = $spSourceList.Fields;
        $spTargetFields = $spTargetList.Fields;

        # Collect the fields that are missing on the target list, then add them
        $spFields = New-Object System.Collections.ArrayList;
        foreach ($field in $spSourceFields) {
            if (-not ($spTargetFields.Contains($field.ID))) {
                $spFields.Add($field) | Out-Null;
            }
        }

        foreach ($field in $spFields) {
            if ($field) {
                Write-Host -ForegroundColor Yellow ("Adding field " + $field.Title + " (" + $field.InternalName + ")");
                $spTargetFields.Add($field) | Out-Null;
            }
        }

        # Copy the views that do not exist on the target list yet
        $spSourceViews = $spSourceList.Views;
        $spTargetViews = $spTargetList.Views;
        foreach ($view in $spSourceViews) {
            $contains = $spTargetViews | ? { $_.Title -eq $view.Title }
            if (-not ($contains)) {
                $spTargetViews.Add($view.Title, $view.ViewFields.ToStringCollection(), $view.Query, $view.RowLimit, $view.Paged, $view.DefaultView);
            }
        }

        # Copy the items field by field
        $spSourceItems = $spSourceList.Items;

        foreach ($item in $spSourceItems) {
            if ($item) {
                $newItem = $spTargetList.Items.Add();
                foreach ($spField in $spSourceFields) {
                    try {
                        if ($spField -and $spField.Hidden -ne $true -and $spField.ReadOnlyField -ne $true -and $spField.InternalName -ne "ID") {
                            $newItem[$spField.InternalName] = $item[$spField.InternalName];
                        }
                    } catch [Exception] { Write-Host -f Red ("Could not copy content " + $item[$spField.InternalName] + " from field " + $spField.InternalName) }
                }
                $newItem.Update();
                #Write-Host -f Green "Item copied";
            }
        }
    } else {
        Write-Host -f Red "List $TargetList does not exist";
    }
} else {
    Write-Host -f Red "List $SourceList does not exist";
}

Ensuring an LDAP Claim and what that means for your SPUser Object

So I have a customer using LDAP as an authentication Provider on SharePoint 2010.

I wrote a script a couple of weeks ago, that migrates the permissions of a user from one account to another on either Farm, WebApplication, Site or Web Level (taking into consideration Site Collection Admin Permissions, Group Memberships and any ISecurableObject [Web, List, Item, Folder, Document] RoleAssignments excluding ‘Limited Access’).

Move-SPUser only does the trick in situations where you have an existing user object, you create a new user object and then migrate. If the user is actually using both accounts simultaneously, Move-SPUser is not your friend.

This is the reason:

Detailed Description

The Move-SPUser cmdlet migrates user access from one domain user account to another. If an entry for the new login name already exists, the entry is marked for deletion to make way for the Migration.

source: http://technet.microsoft.com/en-us/library/ff607729(v=office.15).aspx


So now I have my script, but the difference between ensuring an LDAP account and an AD claim is that with the LDAP account you need to explicitly give the claim string. With the AD account that is not the case.

LDAP ClaimString:


AD ClaimString:
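For illustration, encoded claim strings look roughly like this (the membership provider name, domain and account are placeholders):

```text
i:0#.f|ldapmember|jane.doe@contoso.com    <- forms/LDAP claim: the identifier is the email
i:0#.w|contoso\jane.doe                   <- Windows (AD) claim: the identifier is the SAM account name
```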


With both, the best idea is to ensure the user in the following way:

$claim = New-SPClaimsPrincipal -Identity $line.Name -IdentityType "WindowsSamAccountName";

$user = $spWeb.EnsureUser($claim.ToEncodedString());

Additionally, with the LDAP claim the email property is not set. Interestingly enough, the email is the claim identifier though, so the Name property of the SPUser object is in this case the email. So you will want to add the following two lines:

$user.Email = $user.Name;
$user.Update();

Now you have really ensured that the user object exists on the site collection in the same way!