App-Only Authentication in SharePoint Provider Hosted Apps

In an article I wrote a few weeks ago, Renew Certificate in Provider Hosted Apps Scenario, I described how to renew an expired certificate in the context of Provider Hosted Apps. What I missed during that activity was a simple flag when creating the trusted security token issuer.

New-SPTrustedSecurityTokenIssuer -Name "$issuerName" -RegisteredIssuerName "$regIssuerName" -Certificate $certificate -IsTrustBroker

The thought process behind this little change is the reason for this article. One and a half working days went into meetings, research and tests. The solution was found in a team effort, which was only possible because we could concentrate for a substantial stretch of time without interruption.

The issue is hard to find because outputting the SharePoint object via the command below

Get-SPTrustedSecurityTokenIssuer | ? { $_.Name -eq "$issuerName" }

gives no indication of whether the -IsTrustBroker flag has been set.
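
To see everything the object does expose, you can dump all of its properties; a minimal sketch, assuming $issuerName holds the name of your issuer:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Dump every property of the registered issuer to see what is (and is not) exposed
Get-SPTrustedSecurityTokenIssuer | Where-Object { $_.Name -eq $issuerName } | Format-List *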

This article gave me a bit of insight into the New-SPTrustedSecurityTokenIssuer command:

https://blogs.msdn.microsoft.com/shariq/2013/05/07/how-to-set-up-high-trust-apps-for-sharepoint-2013-troubleshooting-tips/

Significance/additional info of the cmdlets

  1. issuerID : assigning the GUID generated in the previous step
  2. publicCertPath : path where I saved my .cer file.
  3. web : your Developer site URL
  4. realm : should be the same as your farm ID
  5. New-SPTrustedSecurityTokenIssuer : Just a tip, when you use the Name parameter it can be helpful to include a readable name, such as “High Trust App” or “Contoso S2S apps” instead of the issuer ID.
  6. -IsTrustBroker: this flag ensures that you can use the same certificate for other apps as well. If you don’t include it, you might receive a “The issuer of the token is not a trusted issuer” error. So we have two possible approaches, each with their own pros and cons, i.e. use the same certificate shared by multiple apps, or use a separate certificate for each app. Read additional details at Guidelines for using certificates in high-trust apps for SharePoint 2013
  7. iisreset : to ensure the issuer becomes valid immediately; otherwise it can take up to 24 hours.
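
Putting those notes together, the whole registration could look roughly like this; a minimal sketch, where the issuer GUID, the certificate path and the site URL are placeholders for your environment:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$issuerId       = "[GUID generated for the issuer]"
$publicCertPath = "[Path to the saved .cer file]"
$site           = Get-SPSite "[DeveloperSiteUrl]"
$realm          = Get-SPAuthenticationRealm -ServiceContext $site   # should equal the farm ID

$certificate    = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($publicCertPath)
$regIssuerName  = "$issuerId@$realm"

# readable name plus -IsTrustBroker, so the certificate can be shared by several apps
New-SPTrustedSecurityTokenIssuer -Name "Contoso S2S apps" -RegisteredIssuerName $regIssuerName -Certificate $certificate -IsTrustBroker

iisreset   # otherwise the issuer only becomes valid after up to 24 hours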

The context and symptom of this issue:

Context:

SharePoint Provider Hosted Apps run in the PHA environment. They can create client contexts for the app webs on the SharePoint side.

This is possible by either using the existing user token

spContext.CreateUserClientContextForSPAppWeb()

or creating one via an app-only token.

TokenHelper.GetClientContextWithAccessToken(_hostWeb.ToString(), appOnlyAccessToken)

Symptom:

Only the first of the two options works.

With the app-only token, an error is thrown when trying to execute a CAML query to retrieve items.

This had been working until the certificates used for the PHA-SharePoint trust expired.

Stacktrace:

at System.Net.HttpWebRequest.GetResponse()

at Microsoft.SharePoint.Client.SPWebRequestExecutor.Execute()

at Microsoft.SharePoint.Client.ClientRequest.ExecuteQueryToServer(ChunkStringBuilder sb)

Button_Click: The remote server returned an error: (401) Unauthorized.

 

Send A SOAP Message to Nintex Workflow WebService – DeleteWorkflow

Yesterday I was challenged to develop a script that deletes a list workflow on 105 sites and publishes it with a new name.

There is a bug in Nintex: when you copy a site collection, the GUIDs of the workflow, the list and the web are the same as in the source site. This sometimes confuses Nintex, in this case regarding conditional start. A conditional start adds an event receiver to the list, and the workflow itself runs synchronously, so when saving a form it takes a couple of seconds to close because the form waits for the workflow to finish. Even if the workflow is small, this always takes longer than the user expects. So we changed the start condition to always run on change, but used the condition action as the first action in the workflow; the workflow now always starts (asynchronously) but ends right away if the condition is not met. We buy performance at the cost of more historic Nintex data.

So back to the task. Publishing a workflow can be done with NWAdmin, which was my obvious choice to team up with PowerShell to run through the sites of my web application and publish the workflow. But publishing alone does not help, as the GUID stays the same. We need to decouple the workflow from its history, and this can be done by publishing it with a new name (confirmed by Nintex Support).

NWAdmin, however, does not provide a method to delete a workflow. I then looked into the dreaded “using the IE process as a COM application”, but the page where you manage a workflow is really irritating from a DOM perspective. Also, the URL click event triggers a JavaScript method with a confirm window.

function DeleteWorkflow(sListId, sWorkflowId, sWorkflowType, bPublished) {
    if (bPublished) {
        if (!confirm(MainScript_DeleteWfConfirm))
            return;
    }
    else if ((!bPublished) && typeof (bPublished) != "undefined") {
        if (!confirm(MainScript_DeleteUnpublishedWfConfirm))
            return;
    }
    else {
        // orphaned workflows
        if (!confirm(MainScript_DeleteOrphanedWfConfirm))
            return;
    }
    ShowProgressDiv(MainScript_DeletingWfProgress);
    deletedWorkflowID = sWorkflowId;
    var oParameterNames = new Array("listId", "workflowId", "workflowType");
    if (sListId == "") {
        sListId = "{00000000-0000-0000-0000-000000000000}";
    }
    var oParameterValues = new Array(sListId, sWorkflowId, sWorkflowType);
    var callBack = function () {
        if (objHttp.readyState == 4) {
            if (CheckServerResponseIsOk()) {
                //delete the table row's for this workflow
                var tableRows = document.getElementsByTagName("TR");
                for (var i = tableRows.length - 1; i > -1; i--) {
                    if (tableRows[i].getAttribute("WfId") == deletedWorkflowID) {
                        tableRows[i].parentNode.removeChild(tableRows[i]);
                    }
                }
                SetProgressDivComplete(MainScript_WfDeleteComplete);
            }
        }
    }
    InvokeWebServiceWithCallback(sSLWorkflowWSPath, sSLWorkflowWSNamespace, "DeleteWorkflow", oParameterNames, oParameterValues, callBack);
}

As you can see, there is an if-clause which opens a confirm window in every case, so I could not use this method. But thankfully I found the last line

InvokeWebServiceWithCallback(sSLWorkflowWSPath, sSLWorkflowWSNamespace, "DeleteWorkflow", oParameterNames, oParameterValues, callBack);

That took me on the right track.

I looked into that method, but that was the less efficient way of approaching the problem. The link to the web service would have gotten me there faster (/_vti_bin/NintexWorkflow/Workflow.asmx?op=DeleteWorkflow).


function InvokeWebServiceWithCallback(sServiceUrl, sServiceNamespace, sMethodName, oParameters, oParameterValues, fCallBack) {
    if (objHttp == null)
        objHttp = createXMLHttp();

    oTargetDiv = null; // prevents the onstatechange code from doing anything


    // Create the SOAP Envelope
    var strEnvelope = "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "<soap:Body>" +
                    "<" + sMethodName + " xmlns=\"" + sServiceNamespace + "\">" +
                    "</" + sMethodName + ">" +
                "</soap:Body>" +
               "</soap:Envelope>";

    var objXmlDoc = CreateXmlDoc(strEnvelope);

    // add the parameters
    if (oParameters != null && oParameterValues != null) {
        for (var i = 0; i < oParameters.length; i++) {
            var node = objXmlDoc.createNode(1, oParameters[i], sServiceNamespace);
            node.text = oParameterValues[i];
            objXmlDoc.selectSingleNode("/soap:Envelope/soap:Body/" + sMethodName).appendChild(node);
        }
    }

    var objXmlDocXml = null;
    if (typeof (objXmlDoc.xml) != "undefined")
        objXmlDocXml = objXmlDoc.xml; // IE
    else
        objXmlDocXml = (new XMLSerializer()).serializeToString(objXmlDoc); // Firefox, mozilla, opera

    objHttp.open("POST", sServiceUrl, true);
    objHttp.onreadystatechange = fCallBack;
    objHttp.setRequestHeader("Content-Type", "text/xml; charset=utf-8");
    objHttp.setRequestHeader("Content-Length", objXmlDocXml.length);
    if (sServiceNamespace.charAt(sServiceNamespace.length - 1) == "/")
        objHttp.setRequestHeader("SOAPAction", sServiceNamespace + sMethodName);
    else
        objHttp.setRequestHeader("SOAPAction", sServiceNamespace + "/" + sMethodName);
    objHttp.send(objXmlDocXml);
}

In any case, I developed a script that calls the DeleteWorkflow method via SOAP, and that is what I want to share with you below.

The script deletes exactly one workflow on a list in a given web, based on its ID. The ID of the workflow can be retrieved from the Nintex configuration database.

SELECT workflowid, workflowname
  FROM [Nintex_Config].[dbo].[PublishedWorkflows]
  where workflowname = '[Workflow A]'
  group by workflowid, workflowname

For those of you who panic when seeing or reading SQL: you can also get the ID from the management page (the link) itself, but that kind of defeats the purpose of automating the deletion, because you would need to visit every management page to collect all the IDs… but I guess anybody still reading this is not panicking yet.

By the way, the export-workflows NWAdmin command does not give you the IDs of the workflows…

If you want to get the IDs in a different way, you can use the following PowerShell:

$w = get-spweb "[WebUrl]";
$l = $w.lists["[ListTitle]"];
$l.WorkflowAssociations | select baseid, id, name
$w.Dispose();

The ID you want to use is the baseid.

Back to the SOAP Script…

I am sending the request with the default credentials… this may be something you will want to check. Look at the System.Net.NetworkCredential type if you want to run the call with a dedicated user. Don’t forget the security implications… 😉
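
If you want to go with a dedicated account instead, the credentials line in the script below could be swapped for something like this; the account name and password are placeholders, and Get-Credential would be the nicer option over a plain-text password:

# hypothetical dedicated account instead of DefaultCredentials
$req.Credentials = New-Object System.Net.NetworkCredential("[Domain\User]", "[Password]");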

The issues I had: I forgot the XML header, I started with a different content type, and the really big one: I forgot to set the SOAPAction in the header. That is the critical point. If you don’t set it you will get an HTTP 200 response code, but nothing will happen. After a couple of hours I was satisfied with my result. Here it is…

param (
    [string] $WebUrl = "[MyUrl]",
    [string] $ListTitle = "[MyListTitle]",
    [string] $WorkflowId = "[GUID of Workflow without parentheses]"
)


asnp microsoft.sharepoint.powershell -ea 0;

$spweb = get-spweb "$Weburl";
$splist = $spweb.lists | ? { $_.Title -eq "$ListTitle" -or $_.RootFolder.Name -eq "$ListTitle" }
$splistid = $splist.id.toString("B");

$WebServiceBase = $WebUrl;
$WebServiceMethod = "_vti_bin/NintexWorkflow/Workflow.asmx";
$Method = "POST";
$ContentType = "text/xml; charset=utf-8";

# SOAP envelope for the Nintex DeleteWorkflow web service call
$soapEnvelope = "<?xml version=`"1.0`" encoding=`"utf-8`"?>" +
                "<soap:Envelope xmlns:soap=`"http://schemas.xmlsoap.org/soap/envelope/`">" +
                    "<soap:Body>" +
                        "<DeleteWorkflow xmlns=`"http://nintex.com`">" +
                            "<listId>" + $splistid + "</listId>" +
                            "<workflowId>{" + $workflowid + "}</workflowId>" +
                            "<workflowType>List</workflowType>" +
                        "</DeleteWorkflow>" +
                    "</soap:Body>" +
                "</soap:Envelope>";

$req = [system.Net.HttpWebRequest]::Create("$WebServiceBase/$WebServiceMethod");
$req.Method = $method;
$req.ContentType = "text/xml; charset=utf-8";
$req.MaximumAutomaticRedirections = 4;
#$req.PreAuthenticate = $true;

$req.Credentials = [System.Net.CredentialCache]::DefaultCredentials;

$req.Headers.Add("SOAPAction", "http://nintex.com/DeleteWorkflow");
$encoding = new-object System.Text.UTF8Encoding
$byte1 = $encoding.GetBytes($soapEnvelope);

$req.ContentLength = $byte1.length;
$byte1.Length;
$newStream = $req.GetRequestStream();

$newStream.Write($byte1, 0, $byte1.Length);

$res = $null;
$res = $req.getresponse();
$stat = $res.statuscode;
$desc = $res.statusdescription;
        
$stat
$desc
$res
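
To cover the 105 sites mentioned above, you could wrap the script in a loop over the web application; a sketch, assuming the script was saved as Delete-NintexWorkflow.ps1 and the same list exists on every web:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$workflowBaseId = "[GUID of Workflow without parentheses]"
$listTitle      = "[MyListTitle]"

Get-SPWebApplication "http://mywebapplication" | Get-SPSite -Limit All | Get-SPWeb -Limit All | ForEach-Object {
    .\Delete-NintexWorkflow.ps1 -WebUrl $_.Url -ListTitle $listTitle -WorkflowId $workflowBaseId
    $_.Dispose()
}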

Copy List Fields, Views and Items From List to List

Today I had to recreate a SharePoint 2013 List because the old one had an error (Content Approval errored out with “Sorry something went wrong” – Null-Pointer Exception).

My first idea was to create a new list, and so I did, manually. Of course with a dummy name, so I had to recreate it yet again. I didn’t want to get stuck doing it a third time, so I created the little script below.

The script copies the list fields and adds them to the new list, then does the same with all the views, and finally copies all the items (which was the initial idea) to the new list.

The input is fairly simple. You need to specify a URL to identify the web you want to perform this operation on. You could amend the script to also accept a target URL, so you can copy the fields, views and items across site and site collection boundaries; however, you may then run into issues with site fields used in your list that do not exist on the target site collection (Publishing Infrastructure, custom fields), so you would need to do a bit more than just add a parameter and initialize another web object. Also, this works well for lists, but not for document libraries. Another limitation is content types; I did not include those either.

So you see this is more of a starting point than anything else. But it does the job and it was pretty quick to write, so I thought I would share it with you.

param (
    [Parameter(Mandatory=$True)]
    [string] $Url,
    [Parameter(Mandatory=$True)]
    [string] $SourceList,
    [Parameter(Mandatory=$True)]
    [string] $TargetList
)

add-pssnapin microsoft.sharepoint.powershell -ea 0;

$spWeb = get-spweb $url;

$spListCollection = $spweb.Lists;

$spSourceList = $spListCollection.TryGetList($SourceList);
$spTargetList = $spListCollection.TryGetList($TargetList);

if($spSourceList) {
    if($spTargetList) {
        $spTargetList.EnableModeration = $true;

        $spSourceFields = $spSourceList.Fields;
        $spTargetFields = $spTargetList.Fields;

        # collect the fields that exist on the source list but not on the target list
        $spFields = new-object System.Collections.ArrayList;
        foreach($field in $spSourceFields) {
            if(-not ($spTargetFields.Contains($field.ID))) {
                $spFields.Add($field) | Out-Null;
            }
        }

        foreach($field in $spFields) {
            if($field) {
                Write-Host -ForegroundColor Yellow ("Adding field " + $field.Title + " (" + $field.InternalName + ")");
                $spTargetFields.Add($field);
            }
        }

        # copy the views that are missing on the target list
        $spViews = new-object System.Collections.ArrayList;

        $spSourceViews = $spSourceList.Views;
        $spTargetViews = $spTargetList.Views;
        foreach($view in $spSourceViews) {
            $contains = $spTargetViews | ? { $_.Title -eq $view.Title }
            if(-not ($contains)) {
                $spTargetViews.Add($view.Title, $view.ViewFields.ToStringCollection(), $view.Query, $view.RowLimit, $view.Paged, $view.DefaultView);
            }
        }

        $spTargetList.Update();

        # copy the items, skipping hidden, read-only and ID fields
        $spSourceItems = $spSourceList.Items;

        foreach($item in $spSourceItems) {
            if($item) {
                $newItem = $spTargetList.Items.Add();
                foreach($spField in $spSourceFields) {
                    try {
                        if($spField -and $spField.Hidden -ne $true -and $spField.ReadOnlyField -ne $true -and $spField.InternalName -ne "ID") {
                            $newItem[$spField.InternalName] = $item[$spField.InternalName];
                        }
                    } catch [Exception] { Write-Host -f Red ("Could not copy content " + $item[$spField.InternalName] + " from field " + $spField.InternalName) }
                }
                $newItem.Update();
                #Write-Host -f Green "Item copied";
            }
        }
    } else {
        Write-Host -f Red "List $TargetList does not exist";
    }
} else {
    Write-Host -f Red "List $SourceList does not exist";
}
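
Called from a SharePoint management shell, an invocation would look something like this; the script file name and the list titles are placeholders:

.\Copy-ListFieldsViewsItems.ps1 -Url "[WebUrl]" -SourceList "[SourceListTitle]" -TargetList "[TargetListTitle]"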

Ensuring an LDAP Claim and what that means for your SPUser Object

So I have a customer using LDAP as an authentication Provider on SharePoint 2010.

I wrote a script a couple of weeks ago, that migrates the permissions of a user from one account to another on either Farm, WebApplication, Site or Web Level (taking into consideration Site Collection Admin Permissions, Group Memberships and any ISecurableObject [Web, List, Item, Folder, Document] RoleAssignments excluding ‘Limited Access’).

Move-SPUser only does the trick when you have an existing user object, create a new user object and then migrate. If the user is actively using both accounts simultaneously, Move-SPUser is not your friend.

This is the reason:

Detailed Description

The Move-SPUser cmdlet migrates user access from one domain user account to another. If an entry for the new login name already exists, the entry is marked for deletion to make way for the Migration.

source: http://technet.microsoft.com/en-us/library/ff607729(v=office.15).aspx

 

So now I have my script, but the difference between ensuring an LDAP account and an AD claim is that for the LDAP account you need to explicitly provide the claim string. With the AD account that is not the case.

LDAP ClaimString:

i:0#.f|ldapmember|firstname.lastname@mydomain.tld

AD ClaimString:

i:0#.w|domain\SAMAccountName

In both cases, the best approach is to build a claims principal and ensure the user with its encoded claim string:

$claim = New-SPClaimsPrincipal -identity $line.Name -IdentityType "WindowsSamAccountName";

$user = $spweb.EnsureUser($claim.ToEncodedString());

Additionally, with the LDAP claim the email property is not set. Interestingly enough, the email is the claim identifier, so the Name property of the SPUser object is in this case the email. So you will want to add the following two lines:

$user.Email = $user.Name;

$user.Update();

Now you have really ensured that the user object is on the site collection in the same way!
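
For the LDAP case specifically, a minimal end-to-end sketch could look like this, assuming your membership provider is called ldapmember:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$spweb = Get-SPWeb "[WebUrl]"

# with LDAP the claim string has to be given explicitly
$claimString = "i:0#.f|ldapmember|firstname.lastname@mydomain.tld"
$user = $spweb.EnsureUser($claimString)

# the email property is not populated for the LDAP claim, but the Name holds the email
$user.Email = $user.Name
$user.Update()

$spweb.Dispose()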


Static IP? No thanks, I’ve got FTP!

So yes, there is a bit of a logical issue in the title. If I have FTP, the FTP server of course already has a static IP tied to its server name. But maybe I don’t want that static IP for my home machine; I want it for a different purpose, and it would cost me 15 EUR/month to get one from my Internet provider. I could use a service that tunnels my requests via a static IP to my dynamic one, but then I would have to register with somebody.

So I thought: why can I not do the following? Trigger a timer job on my home machine, get the IP address and store it in a file. That file I could either push via a service like Dropbox (but I don’t want Dropbox on my server) or upload via FTP.

I took the code from this site.

Here it is:


function UploadFTP {
    param(
        [string] $user,
        [string] $url,
        [string] $port,
        [string] $pass,
        [string] $localPath,
        [string] $remotePath
    )

    # create the FtpWebRequest and configure it
    $ftp = [System.Net.FtpWebRequest]::Create("ftp://" + $url + ":" + $port + "/" + $remotePath);
    $ftp = [System.Net.FtpWebRequest]$ftp
    $ftp.Method = [System.Net.WebRequestMethods+Ftp]::UploadFile
    $ftp.Credentials = new-object System.Net.NetworkCredential($user,$pass);
    $ftp.UseBinary = $true
    $ftp.UsePassive = $true
    # read in the file to upload as a byte array
    $content = [System.IO.File]::ReadAllBytes($localPath);
    $ftp.ContentLength = $content.Length
    # get the request stream, and write the bytes into it
    $rs = $ftp.GetRequestStream()
    $rs.Write($content, 0, $content.Length)
    # be sure to clean up after ourselves
    $rs.Close()
    $rs.Dispose()
}

function DownloadFTP {
    param(
        [string] $user,
        [string] $url,
        [string] $port,
        [string] $pass,
        [string] $downloadPath,
        [string] $remotePath
    )
    # Create a FTPWebRequest
    $FTPRequest = [System.Net.FtpWebRequest]::Create("ftp://" + $url + ":" + $port + "/" + $remotePath);
    $FTPRequest.Credentials = New-Object System.Net.NetworkCredential($user,$pass)
    $FTPRequest.Method = [System.Net.WebRequestMethods+Ftp]::DownloadFile
    $FTPRequest.UseBinary = $true
    $FTPRequest.KeepAlive = $false

    # Send the ftp request
    $FTPResponse = $FTPRequest.GetResponse()
    # Get a download stream from the server response
    $ResponseStream = $FTPResponse.GetResponseStream()
    # Create the target file on the local system and the download buffer
    $LocalFile = New-Object IO.FileStream ($downloadPath,[IO.FileMode]::Create)
    [byte[]]$ReadBuffer = New-Object byte[] 1024
    # Loop through the download
    do {
        $ReadLength = $ResponseStream.Read($ReadBuffer,0,1024)
        $LocalFile.Write($ReadBuffer,0,$ReadLength)
    }
    while ($ReadLength -ne 0)

    $LocalFile.Close();
    $LocalFile.Dispose();
}

$user = "someusername"
$url = "some.ftp.server"
$port = "21";
$pass = "somepassword";
$localPath = "C:\tmp\myfile.txt";
$downloadPath = "C:\tmp\myfiledown.txt";
$remotePath = "myuploadedfile.txt";

$ip = Get-NetIPAddress | ? { $_.AddressFamily -eq "IPv4" -and $_.InterfaceAlias -eq "Ethernet"}
$ip.IPv4Address > $localPath;

UploadFTP $user $url $port $pass $localPath $remotePath
DownloadFTP $user $url $port $pass $downloadPath $remotePath

So what I am doing is defining my variables, writing my IP to the local path, uploading that file and downloading it again. My PoC was with one machine, so the expectation is that the downloaded file and the original file are the same. Which is true.

The eventual setup will look a bit different, because I will have to get at the public IP and set up the job that uploads the file. On the other side I will only need the part of the script that downloads the file.

So my use case is: I want to connect to a server that is connected to the internet, but I don’t know its IP, because it is dynamic (DHCP).
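
For the public IP, Get-NetIPAddress only returns the local adapter addresses, so one option is to ask an external echo service; this is a sketch, and the use of api.ipify.org is my assumption, not part of the original setup:

# requires PowerShell 3.0+ for Invoke-WebRequest
$publicIp = (Invoke-WebRequest -Uri "https://api.ipify.org" -UseBasicParsing).Content
$publicIp > $localPath

UploadFTP $user $url $port $pass $localPath $remotePath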

IIS WAMREG admin Service – Windows Event 10016 – Wictor Wilén Addendum

The reason for sharing this post today is that I had the issue described in Wictor Wilén’s post, and the solution posted there did not work for me at first, so I wanted to elaborate a little bit.

I was pretty sure that Wictor knows his stuff (he is a well-known MVP after all), so I assumed I was doing something wrong. So true.

The first wrong turn: when I tried fixing this the first time, I didn’t understand the take-ownership solution he proposed. Once I figured that out and it still didn’t work, I tried finding other sources, but couldn’t. Enter “the benefits of Social/Web 2.0”.

When checking the comments on Wictor’s blog, I saw that a lot of others faced the same issue I did, and then I saw Blair’s post with the solution… I should have known this: an x86/x64 issue yet again.

Find the full description of how I solved this below and again a special shout-out to Wictor for the solution:

This is the error you will see in Windows Event Log:


Windows Event Log

You can find the details of this error here:

To resolve this issue, you need to give permissions to the accounts executing SharePoint; you can either look them up in the Services snap-in or grant the permissions via the local SharePoint/IIS groups.

As the security tab of the IIS WAMREG admin service is greyed out, you need to give full permissions to the local administrators group. To do this you need to take ownership of the following two keys:

HKEY_CLASSES_ROOT\AppID\{61738644-F196-11D0-9953-00C04FD919C1}


Registry Entry 1

and

HKCR\Wow6432Node\AppID\{61738644-F196-11D0-9953-00C04FD919C1}


Registry Entry 2

After that you will be able to edit the permissions on the permissions tab in Component Services.


Component Services

12-Hive, 14-Hive, 15-Hive

This article got lost in my drafts…a bit old but always useful.

Because I just keep forgetting where it is. In SharePoint 2007 (the 12-hive) it is:

[InstDrive]:\Program Files\Common Files\Microsoft Shared\web server extensions\12

In 2010 (the 14-hive) it is:

[InstDrive]:\Program Files\Common Files\Microsoft Shared\web server extensions\14

In 2013 (the 15-hive) it is:

[InstDrive]:\Program Files\Common Files\Microsoft Shared\web server extensions\15

and the 2010 14-hive still exists.
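
If you would rather ask SharePoint than memorize the paths, SPUtility can resolve them; a sketch, to be run on a SharePoint server:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# 2007/2010: returns the root of the current hive
[Microsoft.SharePoint.Utilities.SPUtility]::GetGenericSetupPath("")

# 2013: the versioned variant lets you address the 14 or 15 hive explicitly
[Microsoft.SharePoint.Utilities.SPUtility]::GetVersionedGenericSetupPath("", 15)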

Getting the “Sign In as different user”-option back in SP2013

Here is a great and easy-to-follow post on how to get your “Sign in as Different User” option back into the welcome control in SharePoint 2013. I can understand why Microsoft removed it, but in real life it is still necessary for administrators, especially when troubleshooting. I’ve been working with SP2013 for the last six months now and I seriously miss this option. No alternative is as easy or as fast.

http://nickgrattan.wordpress.com/2012/07/23/sign-in-as-different-user-and-sharepoint-2013/

HowTo upgrade a Site Collection with a custom site definition from SPS 2010 to SPS 2013 (e.g. NewsGator)

Here is a step-by-step guide on what can happen when you upgrade a site collection with a custom site definition, like a NewsGator Social Sites community, from SharePoint 2010 to SharePoint 2013. This is interesting in the context of a SharePoint 2010 to SharePoint 2013 migration. The assumption is that the content database has already been attached/mounted and upgraded via Test-SPContentDatabase and Mount-SPContentDatabase.

The only thing I want to mention about the two basic PowerShell commands (they are straightforward, and you can read all about them in the MVP blogs and on TechNet) is that you should install the custom site definition solutions before mounting, and that you should consider RBS, which will be a blocker if you mount the database without also re-attaching the filestreams accordingly. My advice: if you can shoulder the risk, remove RBS completely from your database before upgrading.


Test-SPContentDatabase -name WSS_Content_DB -webapplication http://mywebapplication

Mount-SPContentDatabase "MyDatabase" -DatabaseServer "MySQLContentAlias" -WebApplication http://mywebapplication

Why is this article important or interesting?
The site definition contains the set of pages (default.aspx or any other *.aspx files) that may be used. This is nothing special; as a matter of fact, the out-of-the-box (ootb) team site definition does the same thing. The difference is that the ootb site definition ships with SharePoint 2013, whereas a custom site definition is packaged in a solution that is not available in a clean installation of SharePoint 2013. That means that if you do not install the solution, mount the database and hit the root of the site collection, you will get a 404 return code, because the solution files are not deployed and hence the default.aspx does not exist.


404

You will however find the settings page working…


settings page

You will need to use some PowerShell magic; this of course depends on the name of your solution(s). In essence this deploys the solutions to the 14 and 15 hives of the web front ends, depending on whether they are global or web-application scoped.

In my case, these were the newsgator-related solutions.


get-spsolution | ? { $_.DisplayName.startswith("newsgator.") } | % { if($_.ContainsWebApplicationResource) { install-spsolution -gacdeployment -compatibilitylevel {14,15} $_ -force -AllwebApplications} }
get-spsolution | ? { $_.DisplayName.startswith("newsgator.") } | % { if(-not $_.ContainsWebApplicationResource) { install-spsolution -gacdeployment -compatibilitylevel {14,15} $_ -force } }
get-spsolution | ? { $_.DisplayName.startswith("sharepoint.") } | % { if($_.ContainsWebApplicationResource) { install-spsolution -gacdeployment -compatibilitylevel {14,15} $_ -force -AllwebApplications} }

The site will then show up.


settings page

You have two options now. Either you click on the “Start now” link in the red notification bar to start the upgrade via UI or you do it via powershell.

If you do it via PowerShell, make sure to use the -VersionUpgrade option or else it will not work.
upgrade-spsite https://all-sp13t…/sites/community3 -versionupgrade -confirm:$true
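
If you have a whole batch of site collections to upgrade, a loop may help; a sketch, assuming you really do want to version-upgrade every site collection in the web application:

Get-SPSite -WebApplication "http://mywebapplication" -Limit All | ForEach-Object {
    Upgrade-SPSite $_.Url -VersionUpgrade -Confirm:$false
}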

Wait for the 100.00% to be reached…


settings page

This is what your page looks like after upgrading. Keep in mind that I did not fully install NewsGator but only deployed the solutions, so the error message in the web part is to be expected. If you go back to the third screenshot you will see that this is actually the same as before.


settings page

Even if the NewsGator features are turned off and the webparts removed, the solutions need to be present for the custom site definitions to work on SP2013.

After removing the solutions:


get-spsolution | ? { $_.DisplayName.startswith("newsgator.") } | % { if($_.ContainsWebApplicationResource) { uninstall-spsolution $_ -AllwebApplications -confirm:$false} }
get-spsolution | ? { $_.DisplayName.startswith("newsgator.") } | % { if(-not $_.ContainsWebApplicationResource) { uninstall-spsolution $_ -confirm:$false} }
get-spsolution | ? { $_.DisplayName.startswith("sharepoint.") } | % { if($_.ContainsWebApplicationResource) { uninstall-spsolution $_ -AllwebApplications -confirm:$false} }

this is what you see (again):


settings page

So you need to keep the solutions providing the site definition.

If you only deploy


install-spsolution -gacdeployment -compatibilitylevel {14,15} -force newsgator.blanksite.wsp
install-spsolution -gacdeployment -compatibilitylevel {14,15} -force newsgator.sitedefinitions.wsp

you will get this. So all of the dependent solutions are necessary.


settings page

So in essence this is not too complicated, but you need to make sure you put all the pieces in place before you upgrade.
I also suggest performing a dry run so you can sort out all the solution issues before you upgrade your production environment. In the end it may be just as easy as Microsoft advertises, but you need to know the little pitfalls to successfully get from A to B.

Excel Spreadsheet opening in browser – “unable to process your request. Wait a few minutes and try performing this operation again”

If you ever have this issue, or a similar one with Word, read on.
Error

Try checking this folder in the 14-hive:
C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\XML

There are a couple of files to consider.

  • serverfiles.xml
  • serverfilesExcelServer.xml
  • serverfilesPerformancePoint.xml
  • serverfilesvisioserver.xml

In my case, one server was acting out (not behaving the same way as the other two). Somebody had manually changed the content of the file ‘serverfilesExcelServer.xml’ and commented out the first Mapping element:

 <ServerFiles>

     <!-- <Mapping FileExtension="xlsx" RedirectUrlTemplate="/_layouts/xlviewer.aspx?id=|0" NoGetRedirect="TRUE" CreateRedirectUrlTemplate="/_layouts/xlviewer.aspx?new=1"/> -->

    <Mapping FileExtension="xlsb" RedirectUrlTemplate="/_layouts/xlviewer.aspx?id=|0" NoGetRedirect="TRUE" CreateRedirectUrlTemplate="/_layouts/xlviewer.aspx?new=1"/>

    <Mapping FileExtension="xlsm" RedirectUrlTemplate="/_layouts/xlviewer.aspx?id=|0" NoGetRedirect="TRUE" CreateRedirectUrlTemplate="/_layouts/xlviewer.aspx?new=1"/>

</ServerFiles>

After I restored the first mapping for xlsx, the three servers behaved the same way again. This was a head-scratcher, especially because some people go around not caring about supported changes and just make manual edits to 14-hive files. Shame on you, if you do!
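
If you suspect this kind of drift, a quick comparison of the file across the web front ends will tell you; a sketch, where the server names are placeholders and the check runs over the admin shares:

$servers = "WFE01", "WFE02", "WFE03"
$relativePath = "C$\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\XML\serverfilesExcelServer.xml"

$reference = Get-Content "\\$($servers[0])\$relativePath"
foreach ($server in $servers) {
    if (Compare-Object $reference (Get-Content "\\$server\$relativePath")) {
        Write-Host "$server differs from $($servers[0])" -ForegroundColor Red
    } else {
        Write-Host "$server matches $($servers[0])" -ForegroundColor Green
    }
}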