SPSocialFeedManager.GetFeed: Exception: System.UriFormatException: Invalid URI: The hostname could not be parsed.

Today I fixed the second part of one of the craziest SharePoint 2013 issues of my year.

To begin with, this issue only exists in farms that have not yet received the January 2017 CU of SharePoint 2013. Since we are rapidly approaching 2019, I hope most of your farms are already past this update level.

However, I have one customer that is still on a 2016 patch level: 15.0.4815.1000 (April 2016 CU).

As usual, we are currently in the process of updating, and QA and UAT are already patched. However, I was able to get my hands on a machine with the same patch level as production and reproduce the issue.

The issue is extremely rare, and to my shame I didn’t find it; a colleague did. Take note of the fixed-issues list of KB3141477 (January CU 2016 for SharePoint Server 2013).

When you create a new newsfeed post that includes an app URL with an incorrect protocol (http:// instead of https://), the newsfeed page is broken and doesn’t display news. The update ensures that a problem in one of the news items does not block the newsfeed page from rendering.

What this means, as an example: Somebody installs a SharePoint-hosted app (add-in) in your environment. Somebody copies the address of the app (https://app-[GUID].[appdomain]) or any link starting with it into your SharePoint 2013 microfeed. The important part: he/she removes the “s” from the SSL protocol before pasting. This works in the other direction as well, of course (you have an app host with http:// and he/she adds an “s” after “http”).
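As a hedged sketch of the underlying condition (the helper name and logic here are mine, not SharePoint’s; the real check lives inside SPSocialFeedManager), the feed breaks when a stored app link’s scheme differs from the web application zone’s scheme:

```javascript
// Sketch: flag a stored app link whose scheme does not match the web
// application's zone scheme. Illustrative only, not SharePoint's actual
// implementation.
function hasSchemeMismatch(appLink, zoneScheme) {
  var parsed = new URL(appLink); // throws on malformed URLs
  // parsed.protocol is e.g. "http:"; strip the colon before comparing
  return parsed.protocol.replace(":", "") !== zoneScheme;
}

// An app URL pasted with "http" into an https-only app domain:
hasSchemeMismatch("http://app-a5ced8ff740661.apps.contoso.com/", "https"); // true
```

Pre-January-CU builds let this single bad item take the whole feed down; the fix makes it non-fatal.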

The immediate symptom is that your user receives a message that this didn’t work as expected: “we’ve hit a snag”.

The user will click “OK” and think he/she is done with it. Far from the truth, however. Refreshing the page will show a corrupted microfeed with a cryptic message (which is not so cryptic if you know what’s up, but more on that later).

Something went wrong
SharePoint returned the following error: Invalid URI: The hostname could not be parsed. Contact your system administrator for help in resolving this problem.

So now what?

The ULS logs will tell you a bit more, of course, but it is not certain you will make the connection to deleting the item from the microfeed list.

Request for app scheme does not match the webapp's scheme for this zone. Request Uri: http://app-a5ced8ff740661.apps-...
Request for app scheme does not match the webapp's scheme for this zone. Request Uri: http://app-a5ced8ff740661.apps-...
SPSocialFeedManager.GetFeed: Exception: System.UriFormatException: Invalid URI: The hostname could not be parsed.     at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind)...

So you see this and you don’t know what it means. Well, as mentioned above, it has to do with the fact that somebody added a link in the microfeed and used the wrong protocol for the app domain. So you go to the microfeed list and find that only the farm administrator account, holding the “Social Data” permission in the User Profile Service, can actually delete items there. You can use this script to delete them via PowerShell (if you have server access):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ea 0;

$site = "";
$itemIds = @();

$spweb = Get-SPWeb $site;
$microfeed = $spweb.Lists["MicroFeed"];
foreach( $itemId in $itemIds ) {
    if( $itemId -ne $null ) {
        Write-Host ("ItemId: " + $itemId);
        $item = $microfeed.GetItemById($itemId);
        if( $item -ne $null ) {
            $item.Delete();
        }
    }
}
$spweb.Dispose();


Once you delete this item, you will find the feed on the site itself is fine (after a refresh!). However, if in the meantime you have been on the MySite host and checked your stream, and you have been following the particular site where the person added the URL, you will find that feed broken as well. You might be lucky like me and it’s not. But some users might still be affected. You can stop following that room, but that will probably not be a solution for users productively using SharePoint.

Why is the error still there even though the item has already been deleted? Caching. If you delete your browser cache and try again, you will find this doesn’t help. You will then try to clear the timer cache or do an iisreset. Still no dice. What actually helps is to clear the DISTRIBUTED CACHE (AppFabric).

You can do that via the following script, which targets specifically only the activity feed container:

Add-PSSnapin Microsoft.SharePoint.Powershell -ea 0;
Clear-SPDistributedCacheItem -ContainerType DistributedActivityFeedCache;

That will help. You can find more information on this command here. What didn’t help for me, and was a red herring, were Update-SPRepopulateMicroblogFeedCache and Update-SPRepopulateMicroblogLMTCache. By the way, the required parameters for Update-SPRepopulateMicroblogFeedCache changed between April 2016 CU and July 2018 CU.

So in summary:

If you find you have a similar issue: don’t waste your time. Check your farm version, delete the list item, flush the cache. Plan to update to a recent patch level, asap. I hope two professional days of my life turn out to be just minutes for you.

Renew Certificate in Provider Hosted Apps Scenario

With a certain customer of mine I recently had an issue where, in the span of a month, all of the certificates for the Provider Hosted Apps domain (PHA) had to be renewed for four staging environments (including PROD).

I was lucky to be the successor of somebody who had made the same mistake the service provider made a month later, so I was prepared and could save the day. In hopes of saving you the time it took me to force the service provider’s hand (around 10 hours of telco time), I want to give you a brief overview of how to tackle this, the full list of reference articles, and a script to set the SharePoint part (the trust).

First a short introduction.

Certificates. Certificates are often used to encrypt data communication between machines. This is done to make sure that two parties can communicate without a third party listening, and also to verify the identity of whoever initiates the communication.

In the scenario of SharePoint and PHA we have two parties: the PHA server farm and the SharePoint server farm. Usually each farm consists of at least two servers for redundancy/high-availability reasons.

When HTTP communication is done via SSL, each website in IIS has a binding on port 443 which uses a certificate for encrypting the data the site responds with to requests.

Any admin can swap the certificate in IIS. All you need to do is check the certificate that exists and request a new certificate (self-signed, internally trusted or externally trusted) with the correct SAN (Subject Alternative Name).

As an example, let’s assume the following setup:
SharePoint has a wildcard certificate, like *.apps.mycompany.com. The PHA environment has a corresponding certificate for apps.mycompany.com. This may be the same certificate if you request the big kahuna, i.e. a multi-SAN wildcard certificate. Usually that is not the case, and it is not necessary.
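As a side note on why the PHA host needs its own entry: under the usual TLS rules a wildcard only stands in for a single label, so the wildcard certificate does not cover the bare apps.mycompany.com host. A small sketch (function name mine) of that matching logic:

```javascript
// Sketch of RFC-6125-style wildcard matching: "*.apps.mycompany.com"
// covers "app-guid.apps.mycompany.com" but NOT "apps.mycompany.com"
// itself, which is why the PHA host needs its own certificate/SAN entry.
function wildcardCovers(san, hostname) {
  if (san.indexOf("*.") !== 0) return san === hostname;
  var suffix = san.substring(1); // ".apps.mycompany.com"
  if (hostname.length <= suffix.length) return false;
  if (hostname.slice(-suffix.length) !== suffix) return false;
  // the wildcard may only stand in for one label
  return hostname.slice(0, -suffix.length).indexOf(".") === -1;
}
```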

The PHA IIS will have the apps.mycompany.com certificate, and SharePoint will have the wildcard certificate. But how does SharePoint make sure that apps are not moved to a different server that runs different code and pretends to be the PHA server? There is a trust between these servers on the SharePoint side. In essence this article has one message: “Don’t forget this trust!”

The underlying process of replacing the apps.mycompany.com certificate is based on four easy steps, all of which are necessary:

  1. Replace the apps.mycompany.com certificate in the IIS of each PHA server

    This is a no-brainer. Request the certificate, get the response, use certmgr.msc to import the certificate into the Personal store of the machine account. Make sure the certificate has a private key. It can be self-signed, internally trusted or externally trusted (depending on whether you externalize your farm or not).

  2. Export the apps.mycompany.com certificate as pfx (with private key)

    Export it with the private key (and a password) and put it into the location where the web.config of each Provider Hosted App can access it. Usually this certificate is stored in a central location on each PHA IIS server.

  3. Export the apps.mycompany.com certificate as cer (without private key)

    Export it without the private key and put it into a location on a SharePoint server where you can access it from the SharePoint PowerShell script in the next step.

  4. Replace the SharePoint trust via script

    The certificate (cer) is referenced in two locations in SharePoint (SPTrustedRootAuthority, SPTrustedSecurityTokenIssuer). You set it in the SPTrustedRootAuthority by updating the object, and in the token issuer by deleting the SPTrustedSecurityTokenIssuer object and recreating it with the correct IssuerName and RegisteredIssuerName ([Issuer GUID]@[Realm]). See the script below.
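The RegisteredIssuerName format is easy to get wrong, so here is a small sketch making it explicit. The helper names are mine, not part of any SharePoint API; the value really is just the issuer GUID and the realm joined by “@”:

```javascript
// Sketch: the RegisteredIssuerName of an SPTrustedSecurityTokenIssuer
// has the form "[issuer GUID]@[realm]". Helper names are illustrative.
function composeRegisteredIssuerName(issuerGuid, realm) {
  return issuerGuid + "@" + realm;
}

function parseRegisteredIssuerName(name) {
  var at = name.indexOf("@");
  return {
    issuerGuid: name.substring(0, at),
    realm: name.substring(at + 1)
  };
}
```

When recreating the issuer you should reuse the existing RegisteredIssuerName verbatim rather than rebuilding it, which is what the script below does.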

EDIT: The original image differed from the code below; a crucial parameter was missing. The New-SPTrustedSecurityTokenIssuer call must have the flag “-IsTrustBroker”, as seen below. I wrote a specific article on this topic here.

param (
    [string] $CertificateSubjectAlternativeName = "apps.mycompany.com"
    , [string] $CertificatePathLocation = "[MyDrive]:\[MyPath]\apps.mycompany.com.cer"
)

Add-PSSnapin Microsoft.SharePoint.PowerShell -ea 0;

$certificate = $null;
$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($CertificatePathLocation);

if( $certificate -ne $null ) {
    $tra = $null;
    $tra = Get-SPTrustedRootAuthority | ? { $_.Certificate.Subject.Contains($CertificateSubjectAlternativeName) };

    if( $tra -ne $null ) {
        $tra.Certificate = $certificate;
        $tra.Update();
    } else {
        Write-Host -ForegroundColor Red "Error: No certificate with SAN '${CertificateSubjectAlternativeName}' found in Root Authority Store.";
    }

    $sci = $null;
    $sci = Get-SPTrustedSecurityTokenIssuer | ? { $_.SigningCertificate.Subject.Contains($CertificateSubjectAlternativeName) };

    if( $sci -ne $null ) {
        $regIssuerName = $sci.RegisteredIssuerName;
        $issuerName = $sci.DisplayName;
        $sci.Delete();
        New-SPTrustedSecurityTokenIssuer -Name "${issuerName}" -RegisteredIssuerName "${regIssuerName}" -Certificate $certificate -IsTrustBroker;
    } else {
        Write-Host -ForegroundColor Red "Error: No certificate with SAN '${CertificateSubjectAlternativeName}' found in Trusted Security Token Issuer Store.";
    }
} else {
    Write-Host -ForegroundColor Red "Error: Certificate not found at location '${CertificatePathLocation}'.";
}

The last step is not mandatory in general, but we had to do it on the IIS servers of the PHA environment. The certificate gets cached in the user profile of the user running the app pool. Thus, once you replace it, the app pool is no longer able to find the file. This is broadcast by an ugly error like: ‘CryptographicException: The system cannot find the file specified.’

This is how to fix that: open IIS –> Application Pools –> DefaultAppPool –> right-click –> Advanced Settings –> Load User Profile | set this value to “true”.

It seems a bit absurd to change this setting since it did not have to be set when configuring the PHA connection in the first place, but it does the trick.



Unexpected Response from Server when updating SharePoint ListItem via JSOM

These days I am working a lot on the client side of things. So a couple of months ago I started writing my first lines of JavaScript/JSOM (JavaScript Client-Side Object Model).

I wrote a small method to create list items in a list (listTitle) based on a collection of properties (properties) and their respective values. Here it is:

function createListItem(listTitle, properties) {
    var d = $.Deferred();
    try {
        var ctx = SP.ClientContext.get_current();
        var oList = ctx.get_web().get_lists().getByTitle(listTitle);

        var itemCreateInfo = new SP.ListItemCreationInformation();
        var oListItem = oList.addItem(itemCreateInfo);

        for (var i = 0; i < properties.length; i++) {
            var prop = properties[i];
            oListItem.set_item(prop.Key, prop.Value);
        }
        oListItem.update();
        // after creating an item, load it into the context (see below)
        ctx.load(oListItem);

        var o = { d: d, ListItem: oListItem, List: listTitle };
        ctx.executeQueryAsync(
            function () {
                o.d.resolve(o.ListItem);
            },
            function (sender, args) {
                o.d.reject("Could not create list item in list " + o.List + " - " + args.get_message());
            });
    } catch (e) {
        d.reject(e);
    }
    return d.promise();
}

function updateListItem(listTitle, id, properties) {
    var d = $.Deferred();
    try {
        var ctx = SP.ClientContext.get_current();
        var oList = ctx.get_web().get_lists().getByTitle(listTitle);

        var oListItem = oList.getItemById(id);

        for (var i = 0; i < properties.length; i++) {
            var prop = properties[i];
            try {
                oListItem.set_item(prop.Key, prop.Value);
            } catch (e) {
                console.log(prop.Key + ' ' + prop.Value);
            }
        }
        oListItem.update();
        // note: do NOT load the item into the context after an update

        var o = { d: d, ListItem: oListItem, List: listTitle, p: properties };
        ctx.executeQueryAsync(
            function () {
                o.d.resolve(o.ListItem);
            },
            function (sender, args) {
                o.d.reject("Could not update list item in list " + o.List + " - " + args.get_message());
            });
    } catch (e) {
        d.reject(e);
    }
    return d.promise();
}

So this is what happened when I had this code execute on editing another item in a different list…

When I debugged using Chrome (my browser of choice when writing JavaScript – never used it before that, interestingly…) I received the error “unexpected response from the server”.

I figured out that there are two key lines in this code that can be the cause of this.

var ctx = SP.ClientContext.get_current();

ctx.load(oListItem);
In my case the first line was actually not responsible for the error message. For your reference if you use var ctx = new SP.ClientContext(url); you may encounter this error message. So make sure to check that.

You should always use the current client context when using JSOM, similar to the best-practice guidelines for opening webs on the server side (SSOM, Server-Side Object Model).

In my case the second line was the cause for the issue.

When creating an item I need to load the item into the context afterwards (or the error will show up even if the item is created correctly).

When updating an item the item may not be loaded into the context afterwards (or the error will show up even if the item is updated correctly).

It kind of makes sense, because when creating an item you are actually sending an SP.ListItemCreationInformation to the server. When updating an item I already have my listitem object. Why would I need to load all the other information afterwards?

So once I removed the load call from the update method, the code no longer failed and the error message disappeared.

So for the experts among you this may be old news, but I actually needed to think about this for a few minutes before I figured it out, so I thought it was well worth blogging about. Especially since I haven’t blogged for quite some time.

AppManagement and SubscriptionSettings Services, Multiple Web Applications and SSL

So currently I am setting up four environments, of which one is production, two are staging and another was a playground installation.

My staging environments (TEST, QA, PROD) are multi-server, multi-farm systems (multi-farm because the 2013 farm publishes Search and UPA to existing 2010 farms).
They are running SPS2013 Standard with March PU 2013 + June CU 2013. They will be using app pool isolation, and the App Management and SubscriptionSettings services have their own account (svc_sp{t, q, p}_app, i.e. svc_spt_app, svc_spq_app and svc_spp_app).

I have three web applications of which all are secured by SSL making a wildcard certificate necessary for the app domain. Each has their own account (svc_sp{t, q, p}_{col, tws, upa}). The reason for this is that I will be using Kerberos Authentication and for the SPNs I need dedicated accounts for each Application URL.

My playground was once a 4-server farm, but 3 servers have since been removed. It runs neither the March PU 2013 nor the June CU 2013. There, app pool isolation without SSL is used.

On the playground the app management worked well. I actually encountered my problem on TEST first and tried to replicate it on the playground, but couldn’t. But I am getting ahead of myself. The system was set up via AutoSPInstaller, and the necessary certificates and IPs involved were requested and implemented. The AD team did the domain setup for me. I didn’t set up my environment following this article, but it is a good one to read. I also got from it the idea of creating a separate dummy web application for attaching my IIS bindings and certificate, which makes a lot of sense because of security considerations and Kerberos.

The first article to read to get an overview of what is necessary and what’s trying to be achieved can be found here.

So I set up everything and still it wasn’t working. What does that mean? I will explain. When I subscribe to an app, download it from the store and add it in a site collection of my choosing, I get to click on it once it is finished installing. The link then leads me to my app domain. With SSL, I could only actually get anywhere when I was using the same application pools; otherwise I saw the below.

This is what I wanted to see:

This is what I saw on any web application that used SSL and had a different app pool account than the one used for my dummy web application.

So this blank page is actually exactly what you see when you leave the request management service running on the frontends without doing any topology configuration.

So I tried to work with the user policy from the web application management page, in hopes of giving the users permissions on the content databases. As I found out later, this was not actually taking effect, but it was exactly what was needed: I had to manually add the app pool account of the app domain to the SPDataAccess group of the content databases. Then it also works with SSL.

I actually set up three web applications WITHOUT SSL on the test staging environment with the same users as the SSL web applications, and that worked like a charm. But for any SSL web application I needed to explicitly give permissions on the content database. This is a nightmare to maintain. For my migration of 20 databases from 2010 to 2013 I need to do this again and again, and likewise for each new content database I create in the future. Just imagine you create a new content database and forget to do this: for any site collection in that content database the above issue will show up. Hard to debug later on.

Not sure what Microsoft is thinking here, but I am happy that it only took me 4 days to figure this one out.


Okay, so I have my dev machine, and I have the typical error: please change your setting of customErrors in the web.config to RemoteOnly or Off so you can see the error.

I tried a lot, but this is what finally did the trick:
change the setting in the 14 hive web.config:
C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS\web.config

– I checked to make sure the spelling was right (it is case-sensitive).
– I added it at different parts of the config file (no difference; you will see there is only one correct place to put it).
– I changed it from Off to RemoteOnly and back to Off (no difference if you are accessing locally).
– I changed it via web.config manually and via IIS (the settings seem to be synced, but in special cases it seems they are not).

Creating a UPA Proxy via PowerShell – TechNet article typo


In the example it says:

$app = Get-SPServiceApplication -Name PartitionedUserProfileApplication
New-SPProfileServiceApplicationProxy -Name PartitionedUserProfileApplication_Proxy -ProfileServiceApplication $app -PartitionMode

But the parameter is actually -ServiceApplication rather than -ProfileServiceApplication.

Tried it, worked. Keep in mind.

If there were a way to add a comment there, I would have done it.

CheckedIn / Added events on libraries

So I had to do another event receiver for a wiki pages library. There are different versions depending on the development state: they differ in whether the library has force checkout enabled or not. See the image below; the lower of the two red rectangles shows the setting, which is either yes or no. The other one defines whether a simple approval workflow is used to manage versions.
Versioning Settings

So, what am I doing in my event receiver? I am adding the title if it was not set (watch out: there is a difference between the dialog for a new page via the site settings menu and the new item in the pages library). The other thing I am doing is changing the settings of XsltListViewWebParts (change the view, set the row limit to at least 100, and filter the view using the title and a column I use called ‘Page’).

These are the two interesting parts of the code:

This one sets the title. That means I need a web opened as an administrator, and then I check to see if the title is the way I want it to be…

So I get the list and the item and then check the title field.
If the title is null, I set it; item.Title does not have a set method, so there we go… Quick question: what is faster, GetItemById or GetItemByUniqueId? I would expect the first option… I did try it once, though, and the ID returned was 33 while the number of items was 13 (I had deleted a lot of pages) and the method returned an error, so I was playing it safe here.

      private void SetTitle(SPItemEventProperties properties)
      {
            SPListItem item = properties.ListItem;
            if (properties.Web != null)
            {
                        using (SPSite site = new SPSite(properties.Web.Url))
                        using (SPWeb web = site.OpenWeb())
                        {
                            SPList list = web.Lists[properties.ListId];
                            SPListItem admItem = list.GetItemById(item.ID);
                            string title = null;
                            if (string.IsNullOrEmpty(
                                admItem[SPBuiltInFieldId.Title] as string))
                            {
                                title = admItem.Name.Replace(".aspx", "");
                                EventFiringEnabled = false;
                                site.AllowUnsafeUpdates = true;
                                web.AllowUnsafeUpdates = true;
                                admItem[SPBuiltInFieldId.Title] = title;
                                admItem.SystemUpdate(false);
                                web.AllowUnsafeUpdates = false;
                                site.AllowUnsafeUpdates = false;
                                EventFiringEnabled = true;
                            }
                            else
                            {
                                title = item.Title;
                            }
                        }
            }
      }

The event firing and the updating are just fine right there. Watch for the system update, which is crucial.

The problem right now is still that the check-in will overwrite the modified-by user that did the check-in that triggered the event; that I will solve separately.

     private void CheckView(SPWeb web, XsltListViewWebPart wp)
     {
            Guid oGuid = new Guid(wp.ViewGuid);
            SPList list = web.Lists[wp.ListId];
            SPView oWebPartView = list.Views[oGuid];
            int limit = 100;
            bool changed = false;
            if (oWebPartView.RowLimit < limit)
            {
                oWebPartView.RowLimit =
                        Math.Max(limit, oWebPartView.RowLimit);
                changed = true;
            }
            // the CAML markup of the original query was stripped by the blog
            // engine; this approximates the intended filter on the title
            // (title is a member set elsewhere in the receiver)
            string query = "<Where><Eq><FieldRef Name='Title' /><Value Type='Text'>" + title + "</Value></Eq></Where>";
            if (oWebPartView.Query != query)
            {
                oWebPartView.Query = query;
                changed = true;
            }
            if (changed)
                oWebPartView.Update();
     }

In this method I simply take the web part, which was already casted, then I pick up the GUID and get the view from the list. Initially I thought: what if somebody has an XsltListViewWebPart referring to a list in a sub-web? Well, that’s not possible – I checked – at least not OOTB.

So now that I have developed these two functions for the check-in event, I thought: what if there is no force checkout? That I still need to implement, by adding an added event and having it work only if there is a valid item to work on, because the properties.ListItem member does not exist after the added event when force checkout is in place.
So you have got to watch that!

        private SPLimitedWebPartManager GetManagerFromItem(SPListItem item)
        {
            if (item == null || item.File == null) return null;
            SPFile itemFile = item.File;
            // get the LimitedWebPartManager from the SPFile object
            SPLimitedWebPartManager wpmgr =
                itemFile.GetLimitedWebPartManager(PersonalizationScope.Shared);
            return wpmgr;
        }

        // elsewhere: iterate the web parts and handle only the XsltListViewWebParts
        SPLimitedWebPartCollection collection = wpmgr.WebParts;
        List<XsltListViewWebPart> consumers = new List<XsltListViewWebPart>();
        foreach (var wp in collection)
        {
          if (wp != null && wp is XsltListViewWebPart)
          {
              XsltListViewWebPart view = (XsltListViewWebPart)wp;
              consumers.Add(view);
              CheckView(web, view);
          }
        }

So what happens here is just that I get the limited web part manager from the SPFile object and then iterate over the web part collection, handling only the XsltListViewWebParts. This is also discussed in many other places; you can google it if you need more in-depth information.

The trouble with editing items on synchronous add-events

As I have written once or twice (see here) before, I create a lot of functionality using event receivers.

Now when you are adding new items you might want to fill field values based on either the event or external data, so you will use a synchronous adding event (SPEventReceiverType.ItemAdding). Once you get the item (SPListItem item = properties.ListItem) and want to edit it, you should use a disable-event-firing strategy where appropriate (trigger the event on edit, not on insert). This is usually appropriate when you know the conditions to be false on insertion, or if you already call methods from insert events that would be duplicated by the edit event.

Now the tricky part of the editing is that if you do it wrong, you will get cryptic messages (“Microsoft.SharePoint.SPException: Save Conflict Your changes conflict with those made concurrently by another user. If you want your changes to be applied, click Back in your Web browser, refresh the page, and resubmit your changes.”). The reason for this is that the user who creates the item is not the same as the context the event receiver is running in as long as you use the standard edit-and-update-methods available.

So let’s do it the right way:

SPListItem item = properties.ListItem;
item["TheInternalFieldName"] = "TheCalculatedValue";
item.SystemUpdate(false);

So this little piece of code took me a long time to figure out. Let’s check out why it’s good and then, what you could have done wrong:
The first step is basic: get the item from the properties.
The second step is editing the item. Easy! The internal field name must exist and the calculated value should be something that fits into the column. Bamm! There you go. You could have done this by yourself so far.
Now, here is the important part: item.SystemUpdate(false).
It’s a system-user update with false as parameter.
If you check MSDN you will find there are two SystemUpdate methods. Web objects have Update (non-overloaded: Update()), list objects have Update (overloaded: Update(), Update(boolean)), and items also have SystemUpdate (overloaded: SystemUpdate(), SystemUpdate(boolean)).
SystemUpdate(false) is exactly what you want as the explanation says: Updates the database with changes that are made to the list item without changing the Modified or Modified By fields, or optionally, the item version.
Okay. So you’re safe. One thing less to worry about. But why? Why? If it’s that simple why am I writing a whole article dedicated to this?
Well, I have seen the error messages. I would need to reproduce them, because it has been a while, but the essence is: usually you use Update, as it is natural for lists and webs, and you use it without a parameter, because usually you don’t have one (web) or don’t need it (list – only when migrating data). So until you understand why SystemUpdate with the boolean is important, a whole day can go by.

Basically I’m saying the same as Karine Bosch. I just found her post while searching for the error messages!

Defining and Connecting Information (Expiration) Policies

Recently I came across the requirement to add a retention policy (expiration policy) to a content type in the context of a contract management solution. So I created a document library and content types for different types of contracts. The content types in turn can have so called policies, i.e. policies can be attached to the content types. Once attached, these policies call a specified workflow, once a condition is fulfilled. This condition may be fulfilled once a certain date is reached, i.e. a timespan has passed since a specified date in time. The timespan may be one of these: (“years”, “months”, “days”)

The docLib and the cTypes – that was easy. Connecting the policy was what I had to research. So let’s just assume that has been done and focus on the interesting stuff.

SPWeb web = null; // ... the web, init using either SPContext or using-statement from SPSite Object
string name = "MyContentNameInWeb"; // ... the name of the content type
string workflowName = "MyWorkflowInWeb"; // ... the name of the workflow
// this is a general method for retrieving the content type from the web by name
SPContentType cType = GetContentTypeByName(web, name);
SPWorkflowTemplate tpl = GetWorkflowByName(web, workflowName); // ... get a workflow from the web context

So the code above is plain setting-up. Get a web…I don’t care how you do it. Get the content type you want the policy to be attached to and now get ready to create / attach it:

/// <summary>Gets the custom data for a workflow-triggering policy.</summary>
/// <param name="tpl">The workflow template that is subsequently triggered.</param>
/// <param name="timeSpan">The time span ('days', 'months', 'years').</param>
/// <param name="numberOf">The number of timeSpans.</param>
public static string GetCustomData(SPWorkflowTemplate tpl, String timeSpan, int numberOf)
{
    return @"<data><formula id=""Microsoft.Office.RecordsManagement.PolicyFeatures.Expiration.Formula.BuiltIn""><number>"
        + numberOf
        + "</number><property>Created</property>"
        + "<period>"
        + timeSpan
        + "</period>"
        + @"</formula><action type=""workflow"" id="""
        + GuidFacade.GetGuidString(tpl.Id)
        + @""" /></data>";
}

The code you see is a method for getting an XML string which defines when and what is triggered by the policy. Notice the formula sub-tree, which specifies the condition. The id is essential here, as it defines the way the information given via the XML is processed. Notice that the name is a qualified name: Microsoft.Office.RecordsManagement.PolicyFeatures.Expiration.Formula.BuiltIn. So if you want to read up on information policies on MSDN, that is where to start looking. The DLL which contains the policy logic is prefixed with Microsoft.Office; it is not part of the standard Microsoft.SharePoint.dll. Also notice the text element of the tag <property>. This is the field where the date is stored that is used to evaluate whether the condition is fulfilled. In this case it is the standard field associated with every list (even a basic custom list): ‘Created’.
You can also see that the value of action type is ‘workflow’, so there are some other types of actions as well, but they are not important in this scenario.
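For a concrete picture, a call like GetCustomData(tpl, "years", 5) produces markup along these lines (pretty-printed here for readability; the workflow GUID is a placeholder for whatever tpl.Id resolves to):

```xml
<data>
  <formula id="Microsoft.Office.RecordsManagement.PolicyFeatures.Expiration.Formula.BuiltIn">
    <number>5</number>
    <property>Created</property>
    <period>years</period>
  </formula>
  <action type="workflow" id="{00000000-0000-0000-0000-000000000000}" />
</data>
```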

The guidfacade gets a specially formatted guid-string (g.ToString("B").ToUpper()).
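GuidFacade is the author’s own helper; for illustration only, what .NET’s ToString("B") plus ToUpper() amounts to can be sketched like this (the function name is mine):

```javascript
// Sketch of what GuidFacade.GetGuidString does with a GUID string:
// .NET's Guid.ToString("B").ToUpper() wraps the value in braces and
// upper-cases it, e.g. "{A5CED8FF-...}" - the shape the <action id>
// attribute expects.
function getGuidStringB(guid) {
  return "{" + guid.toUpperCase() + "}";
}
```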

string customData = GetCustomData(tpl, "years", 5);
if (Policy.GetPolicy(cType) == null)
{
    // if the content type hasn't got a policy yet, create a new one
    Policy.CreatePolicy(cType, null);
}

Policy policyOfContentType = Policy.GetPolicy(cType);
policyOfContentType.Name = policyName; // policyName: the display name you choose
string policyFeatureId = "Microsoft.Office.RecordsManagement.PolicyFeatures.Expiration";
// add the expiration policy feature to the content type
if (policyOfContentType.Items[policyFeatureId] == null)
{
    policyOfContentType.Items.Add(policyFeatureId, customData);
}

This last part of code gets the customData from the method in the second code block and checks whether a policy is associated with the content type; if not, one is created. At this point it is important to remind ourselves that there is a difference between a content type associated with a list or docLib and a content type of the web context. Different policies may be attached to the same content type associated with different objects: a basic content type created on (and therefore attached to) the web context may be attached to a list or docLib, and afterwards a policy may be added to either one without the policy being attached to the other. That has major implications for the order of statements (attach to list, attach policy != attach policy, attach to list).

If you want the policy to be attached to the content type of the list, then you must use the content type of the list to look up the corresponding policy (Policy policy = Policy.GetPolicy(cTypeOfList)).

In the end, check whether an expiration policy is already associated; if not, add the new one. In this case it is plain vanilla: check first, then add. My scenario needed an add-no-edit strategy, but you could also use a replace-and-give-notice strategy or an add-and-edit strategy if you like.
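For comparison, a replace-and-edit variant could look roughly like this — a sketch assuming PolicyItem’s CustomData setter and Update() persist the change (check this against your patch level before relying on it):

```csharp
// Sketch: replace the custom data of an existing expiration entry,
// or add a new one if the content type has none yet.
PolicyItem existingItem = policyOfContentType.Items[policyFeatureId];
if (existingItem != null)
{
    existingItem.CustomData = customData; // edit in place
    existingItem.Update();
}
else
{
    policyOfContentType.Items.Add(policyFeatureId, customData);
}
```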

Installation of SharePoint 2010 beta on Windows Server 2008 R2 as standalone VM

Haven’t blogged for quite some time. Now I’m back with a post to sum up how I installed the SharePoint 2010 beta. Now you might say: “Wow, he installed a Microsoft application. Good for him!”, but the truth is, this is frickin’ hard. As a developer I try to keep away from installation & configuration, but as usual, if nobody does it for you, you gotta do it yourself. Can’t stay behind the buzz, you know?

So here are my findings:

First thing first. The hardware requirements:
Technet: SP 2010 Hardware and software requirements

So I have a restriction myself: no lab, so I needed to use my laptop (bad idea): a Lenovo T61, Intel Core 2 Duo T7300 @ 2 GHz, 4 GB of RAM (effectively 3 GB) on a Windows XP SP2 system (x86).
So the restrictions are clear. This thing is way too slow and I can’t use Hyper-V, so I need to create a standalone installation, one all-in-one VM.
If you are luckier and have a nice setup at home similar to Andrew Connell’s, it might not be that much of a hassle.

If you do, you can use the public Hyper-V demonstration VM instead: 2010 Information Worker Demonstration Virtual Machine (Beta)

Be sure to check whether your machine is capable of running x64 virtual machines (VMware Check Utility) and enable the VT capability in your BIOS.

As my company is a Microsoft Partner, I got all the software I needed from here:
MSDN Subscriptions

I downloaded the following:

  • Windows Server 2008 R2 Standard, Enterprise, Datacenter, and Web (x64) – DVD (English)
  • SQL Server 2008 R2 Enterprise Evaluation November CTP (x86, x64, ia64) – DVD (English)
  • SharePoint Server 2010 Beta (x64) – (English)
  • Office Professional Plus 2010 Beta (x64) – (English)
  • Office Web Applications Beta (x64) – (English)
  • Visual Studio 2010 Ultimate RC (x86) – DVD (English) – no separate version for x64 needed
  • .NET Framework 4 Full RC (x86 and x64) – (English)
  • SharePoint Designer 2010 Beta (x64) – (English)

My VMware image is a 35 GB allocate-on-demand image with bridged Ethernet, 2 GB of memory and 2 processors.

The installation of the above-mentioned software takes up 29.2 GB of space on the external drive I use. You can probably tweak this, because I mostly used ultimate editions and default settings.
Windows Server 2008 R2 comes with a lot of luggage, so you could probably uninstall a few apps before installing new versions as well.
Installing SQL Server turned out to be optional as well.
Then I installed. Watch out for the sequence, because otherwise it will blow up in your face.

Step 1a: Install Windows Server 2008 R2
Step 1b: Install Office 2010 Beta
Step 1c: Install Visual Studio 2010 RC
Step 1d: Install SharePoint Designer 2010 Beta
Step X: Omitted: Create FarmAdmin account (user, not admin – privileges as SPAdmin, WSSAdmin, SQLAdmin)
Step 2: .NET 3.5 SP1 (activate the feature; it is already installed with Windows Server 2008 R2)
Step 3: Install SQL Server 2008
Step 4: Install SQL Server SP1
Step 5: Prerequisites SP 2010 Beta
Step 6: Uninstall Geneva Framework x86 and install Geneva Framework x64
Step 7: Install Hotfix KB 976462
Step 8: Install SharePoint Server 2010 Beta (as standalone)
Step 9: Install Office Web Applications Beta
Step 10: Run SharePoint Configuration Wizard

I used the following blogs / pages for my installation:
A selection of people installing SharePoint 2010 beta on either Windows 7 or Windows Server 2008:
Installation Roundup
This one sounds quite interesting, but I couldn’t get it to work that way; however, this is where I found out that the Geneva Framework wasn’t the right one…
Single Server Complete Install of SharePoint 2010 using local accounts
This one installs on Windows 7, but it was supposed to be standalone as well, so you can take some valuable info from it, and it contains the link to the correct version of the Geneva Framework:
SharePoint 2010 Lab Environment Part 1 – Installing SharePoint on Windows 7

Here more details on each step:

Step 1a: Install Windows Server 2008 R2
Well, this is straightforward. Default installation. Works like a charm, but takes quite some time.

Step 1b: Install Office 2010 Beta
This works fine. No problems here.

Step 1c: Install Visual Studio 2010 RC
This takes a whole lot of time but works just fine.

Step 1d: Install SharePoint Designer 2010 Beta
Haven’t even started using this thing. I’m still fed up with its predecessor. Everybody is raving about what has been done with the Designer, but I’m just not in the mood yet.

Step X: Omitted: Create FarmAdmin account (user, not admin – privileges as SPAdmin, WSSAdmin, SQLAdmin)
This step is advised by Microsoft, of course. It is absolutely vital for a production environment. I just installed using the administrator account, but every security expert will tell you that’s a hack.

Step 2: .NET 3.5 SP1 (activate the feature; it is already installed with Windows Server 2008 R2)
This is absolutely necessary. Here is a screenshot for you to find where to do it:
Activate SP 1 of .NET 3.5 in Windows Server 2008 R2

Step 3: Install SQL Server 2008
This turned out to be optional. But it is good to have SQL Server fully installed so that you get the Management Studio with it; otherwise I advise you to install that separately. Working with SQL Server, you will want a management tool.

Step 4: Install SQL Server SP1
If it is not Service Pack 1, then be sure to install the service pack. Absolutely necessary.
REBOOT. This is where I did my first snapshot.

Step 5: Prerequisites SP 2010 Beta
Takes quite some time, and you need a working network connection, because the setup pulls a few things (like the Geneva Framework) from the corresponding download sites. I tried doing this manually, but it drove me crazy… so I let the installer do it for me in the end.

Step 6: Uninstall Geneva Framework x86 and install Geneva Framework x64
This is absolutely vital. I tried without it and it didn’t work. What happens during configuration is that you get an error while trying to create the sample data. You definitely don’t want your configuration to fail.
You can get the correct version here.

Step 7: Install Hotfix KB 976462
This is the next thing that you absolutely need to do. Get it from here. This hotfix adds support for WCF token authentication without transport security, which SharePoint 2010 needs; that is what the error message in the configuration wizard points to if you don’t install it.
REBOOT. This is where I did my second snapshot.

Step 8: Install SharePoint Server 2010 Beta (as standalone)
Yeah, it’s just a standalone install. Everyone says: “Don’t do it.” But I did and it worked. Drawbacks included but not yet identified. It’s just a test machine, so screw it. 😉

Step 9: Install Office Web Applications Beta
I did this because I wanted to test the Office integration in the new SP 2010. I am a little disappointed.
But at CeBIT this year I got a chance to talk to Microsoft folks, and they told me that Office Web Apps is not meant as an adequate replacement for a full Office client installation.
So how can I complain?
Here are a few screenshots of Excel, Word and PowerPoint, once in the Office client and once in Office Web Apps.
Excel. Web Apps / Office Client

Word. Web Apps / Office Client

PowerPoint. Web Apps / Office Client

Step 10: Run SharePoint Configuration Wizard
Finally… if you got this thing running and it says “configuration successful” at the end, you got it running, dude! Don’t mind the occasional error page on SharePoint Central Administration or the default port 80 site… take your time creating a new web application and site collection… you will see it will work. Nothing persistent.
I still have a little problem with the upload of multiple files, though. That’s ugly. Gotta find out what that is… it works, but shows an error message before the dialog appears.

Now that I’m done, I will go back to MOSS 2007 and try a little development in the area of Business Data Connectivity. For me this is absolutely the best feature in SP 2010. But I believe the Web Parts for BDC services are part of the Enterprise edition. Hm. :/ I don’t know if it even makes sense to advise a customer to get anything but the Enterprise edition… with the new Visio Services, PerformancePoint Services and the other services, it seems there is a large gap between the two editions.