Unexpected Response from Server when updating SharePoint ListItem via JSOM

These days I am working a lot on the client side of things, so a couple of months ago I started writing my first lines of JavaScript/JSOM (the JavaScript client-side object model).

I wrote a small method that creates list items in a list (listTitle) based on a collection of properties (properties) and their respective values. Here it is:

function createListItem(listTitle, properties) {
    var d = $.Deferred();
    try {
        var ctx = SP.ClientContext.get_current();
        var oList = ctx.get_web().get_lists().getByTitle(listTitle);

        var itemCreateInfo = new SP.ListItemCreationInformation();
        var oListItem = oList.addItem(itemCreateInfo);

        for (var i = 0; i < properties.length; i++) {
            var prop = properties[i];
            oListItem.set_item(prop.Key, prop.Value);
        }

        oListItem.update();
        ctx.load(oListItem);
        var o = { d: d, ListItem: oListItem, List: listTitle };
        ctx.executeQueryAsync(
            function () {
                o.d.resolve(o.ListItem);
            },
            function (sender, args) {
                o.d.reject("Could not create list item in list " + o.List + " - " + args.get_message());
            });
    } catch (Exception) {
        console.log(Exception.message);
        // settle the promise even if something throws synchronously
        d.reject(Exception.message);
    }
    return d.promise();
}

function updateListItem(listTitle, id, properties) {
    var d = $.Deferred();
    try {
        var ctx = SP.ClientContext.get_current();
        var oList = ctx.get_web().get_lists().getByTitle(listTitle);

        var oListItem = oList.getItemById(id);

        for (var i = 0; i < properties.length; i++) {
            var prop = properties[i];
            try {
                oListItem.set_item(prop.Key, prop.Value);
            } catch (Exception) {
                console.log(prop.Key + ' ' + prop.Value);
                console.log(Exception.message);
            }
        }
        oListItem.update();
        //ctx.load(oListItem);
        var o = { d: d, ListItem: oListItem, List: listTitle, p: properties };
        ctx.executeQueryAsync(
            Function.createDelegate(o,
                function () {
                    o.d.resolve(o.ListItem);
                }),
            Function.createDelegate(o,
                function (sender, args) {
                    o.d.reject("Could not update list item in list " + o.List + " - " + args.get_message());
                }));
    } catch (Exception) {
        console.log(Exception.message);
        // settle the promise even if something throws synchronously
        d.reject(Exception.message);
    }
    return d.promise();
}
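For completeness, this is how I consume these methods. A minimal sketch, assuming a hypothetical list "Tasks" with a Status field; the properties argument is simply an array of objects with Key/Value pairs, matching what the loops above expect:

var properties = [
    { Key: "Title", Value: "My new item" },
    { Key: "Status", Value: "Open" }
];

createListItem("Tasks", properties)
    .done(function (listItem) {
        // the created item was loaded into the context, so its ID is available here
        console.log("Created item " + listItem.get_id());
    })
    .fail(function (message) {
        console.log(message);
    });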

So here is what happened when this code ran while editing an item in a different list…

When I debugged using Chrome (my browser of choice when writing JavaScript – never used it before that, interestingly…) I received the error “unexpected response from the server”.

I figured out that there are two key lines in this code that can cause this.

var ctx = SP.ClientContext.get_current();

and

ctx.load(oListItem);

In my case the first line was actually not responsible for the error message. For your reference: if you use var ctx = new SP.ClientContext(url); you may encounter this error message, so make sure to check that.

When using JSOM you should always use the current client context, similar to the best-practice guidelines for opening webs in the server-side object model (SSOM).

In my case the second line was the cause of the issue.

When creating an item, I need to load the item into the context after calling update() (otherwise the error shows up even though the item is created correctly).

When updating an item, the item must not be loaded into the context afterwards (otherwise the error shows up even though the item is updated correctly).

It kind of makes sense: when creating an item you are actually sending an SP.ListItemCreationInformation to the server. When updating an item I already have my list item object, so why would I need to load all the other information afterwards?

So once I removed that line from the update method, the call no longer failed and the error message disappeared.

So for the experts among you this may be old news, but I actually needed to think about this for a few minutes before I figured it out, so I thought it was well worth blogging about. Especially since I haven’t blogged for quite some time.


Send A SOAP Message to Nintex Workflow WebService – DeleteWorkflow

Yesterday I was challenged to develop a script that deletes a list workflow on 105 sites and publishes it with a new name.

There is a bug in Nintex: when you copy a site collection, the GUIDs of the workflow, the list and the web are the same as in the source site. This sometimes confuses Nintex, in this case regarding conditional start. The conditional start adds an event receiver to the list, and the workflow itself is synchronous, so saving a form takes a couple of seconds to close, because the form waits for the workflow to finish. Even if the workflow is small, this always takes longer than the user expects. So we changed the start condition to always run on change, but made the condition action the first action in the workflow; the workflow now always starts (asynchronously), but ends right away if the condition is not met. In effect we bought perceived performance at the price of more historical Nintex data.

So back to the task. Publishing a workflow can be done with NWAdmin, which was my obvious choice to team up with PowerShell to run through the sites of my web application and publish the workflow. Publishing alone does not help, though, as the GUID stays the same. We need to decouple the workflow from its history, and this can be done by publishing it with a new name (per Nintex Support).

The NWAdmin tool, however, does not provide a method to delete a workflow. I then looked into the dreaded "using the IE process as a COM application", but the page where you manage a workflow is really irritating from a DOM perspective. Also, the URL click event triggers a JavaScript method with a confirm window.

function DeleteWorkflow(sListId, sWorkflowId, sWorkflowType, bPublished) {
    if (bPublished) {
        if (!confirm(MainScript_DeleteWfConfirm))
            return;
    }
    else if ((!bPublished) && typeof (bPublished) != "undefined") {
        if (!confirm(MainScript_DeleteUnpublishedWfConfirm))
            return;
    }
    else {
        // orphaned workflows
        if (!confirm(MainScript_DeleteOrphanedWfConfirm))
            return;
    }
    ShowProgressDiv(MainScript_DeletingWfProgress);
    deletedWorkflowID = sWorkflowId;
    var oParameterNames = new Array("listId", "workflowId", "workflowType");
    if (sListId == "") {
        sListId = "{00000000-0000-0000-0000-000000000000}";
    }
    var oParameterValues = new Array(sListId, sWorkflowId, sWorkflowType);
    var callBack = function () {
        if (objHttp.readyState == 4) {
            if (CheckServerResponseIsOk()) {
                //delete the table row's for this workflow
                var tableRows = document.getElementsByTagName("TR");
                for (var i = tableRows.length - 1; i > -1; i--) {
                    if (tableRows[i].getAttribute("WfId") == deletedWorkflowID) {
                        tableRows[i].parentNode.removeChild(tableRows[i]);
                    }
                }
                SetProgressDivComplete(MainScript_WfDeleteComplete);
            }
        }
    }
    InvokeWebServiceWithCallback(sSLWorkflowWSPath, sSLWorkflowWSNamespace, "DeleteWorkflow", oParameterNames, oParameterValues, callBack);
}

As you can see, there is an if-clause which opens a confirm window in every case, so I could not use this method directly. But thankfully the last line

InvokeWebServiceWithCallback(sSLWorkflowWSPath, sSLWorkflowWSNamespace, "DeleteWorkflow", oParameterNames, oParameterValues, callBack);

put me on the right track.

I looked into the method, but that turned out to be the less efficient way of approaching the problem. The link to the web service would have gotten me there faster (/_vti_bin/NintexWorkflow/Workflow.asmx?op=DeleteWorkflow).


function InvokeWebServiceWithCallback(sServiceUrl, sServiceNamespace, sMethodName, oParameters, oParameterValues, fCallBack) {
    if (objHttp == null)
        objHttp = createXMLHttp();

    oTargetDiv = null; // prevents the onstatechange code from doing anything


    // Create the SOAP Envelope
    var strEnvelope = "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "<soap:Body>" +
                    "<" + sMethodName + " xmlns=\"" + sServiceNamespace + "\">" +
                    "</" + sMethodName + ">" +
                "</soap:Body>" +
               "</soap:Envelope>";

    var objXmlDoc = CreateXmlDoc(strEnvelope);

    // add the parameters
    if (oParameters != null && oParameterValues != null) {
        for (var i = 0; i < oParameters.length; i++) {
            var node = objXmlDoc.createNode(1, oParameters[i], sServiceNamespace);
            node.text = oParameterValues[i];
            objXmlDoc.selectSingleNode("/soap:Envelope/soap:Body/" + sMethodName).appendChild(node);
        }
    }

    var objXmlDocXml = null;
    if (typeof (objXmlDoc.xml) != "undefined")
        objXmlDocXml = objXmlDoc.xml; // IE
    else
        objXmlDocXml = (new XMLSerializer()).serializeToString(objXmlDoc); // Firefox, mozilla, opera

    objHttp.open("POST", sServiceUrl, true);
    objHttp.onreadystatechange = fCallBack;
    objHttp.setRequestHeader("Content-Type", "text/xml; charset=utf-8");
    objHttp.setRequestHeader("Content-Length", objXmlDocXml.length);
    if (sServiceNamespace.charAt(sServiceNamespace.length - 1) == "/")
        objHttp.setRequestHeader("SOAPAction", sServiceNamespace + sMethodName);
    else
        objHttp.setRequestHeader("SOAPAction", sServiceNamespace + "/" + sMethodName);
    objHttp.send(objXmlDocXml);
}

In any case, I developed a script that runs the DeleteWorkflow method via SOAP, and that is what I want to share with you below.

The script deletes exactly one workflow on a list in a given web, based on its ID. The ID of the workflow can be retrieved from the Nintex configuration database:

SELECT workflowid, workflowname
  FROM [Nintex_Config].[dbo].[PublishedWorkflows]
  where workflowname = '[Workflow A]'
  group by workflowid, workflowname

For those of you who panic when seeing or reading SQL: you can also get the ID from the page (the link) itself, but that kind of defeats the purpose of automating the deletion, because you would need to go to every management page to get all the IDs… but I guess anybody still reading this is not panicking yet…

By the way, the export-workflows NWAdmin command does not give you the IDs of the workflows…

But if you want to get the IDs in a different way, you can use the following PowerShell:

$w = get-spweb "[WebUrl]";
$l = $w.lists["[ListTitle]"];
$l.WorkflowAssociations | select baseid, id, name
$w.Dispose();

The ID you want to use is the baseid.
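If you need this across a whole web application (like the 105 sites in my case), a quick sketch along the same lines; the web application URL and list title are placeholders:

asnp microsoft.sharepoint.powershell -ea 0;
$wa = Get-SPWebApplication "[WebAppUrl]";
foreach ($site in $wa.Sites) {
    $w = $site.RootWeb;
    # TryGetList returns $null instead of throwing if the list does not exist
    $l = $w.Lists.TryGetList("[ListTitle]");
    if ($l) { $l.WorkflowAssociations | select baseid, id, name }
    $site.Dispose();
}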

Back to the SOAP Script…

I am sending the request with the default credentials… this may be something you will want to check. Check out the System.Net.NetworkCredential type if you want to run the call with a dedicated user. Don't forget the security implications… 😉
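If you do want a dedicated user, a minimal sketch would be to swap the DefaultCredentials assignment in the script below for something like this (user, password and domain are placeholders):

# run the request as a dedicated account instead of the current user
$req.Credentials = New-Object System.Net.NetworkCredential("[User]", "[Password]", "[Domain]");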

The issues I had: I forgot the XML header, I started out with a different content type, and, the really big one, I forgot to set the SOAPAction header. That is the critical point: if you don't set it, you get an HTTP 200 response code, but nothing happens. After a couple of hours I was satisfied with my result. Here it is…

param (
    [string] $WebUrl = "[MyUrl]",
    [string] $ListTitle = "[MyListTitle]",
    [string] $WorkflowId = "[GUID of Workflow without parentheses]"
)


asnp microsoft.sharepoint.powershell -ea 0;

$spweb = get-spweb "$Weburl";
$splist = $spweb.lists | ? { $_.Title -eq "$ListTitle" -or $_.RootFolder.Name -eq "$ListTitle" }
$splistid = $splist.id.toString("B");

$WebServiceBase = $WebUrl;
$WebServiceMethod = "_vti_bin/NintexWorkflow/Workflow.asmx";
$Method = "POST";
$ContentType = "text/xml; charset=utf-8";

$soapEnvelope = '<?xml version="1.0" encoding="utf-8"?>' +
                '<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
                '<soap:Body>' +
                '<DeleteWorkflow xmlns="http://nintex.com">' +
                '<listId>' + $splistid + '</listId>' +
                '<workflowId>{' + $workflowid + '}</workflowId>' +
                '<workflowType>List</workflowType>' +
                '</DeleteWorkflow>' +
                '</soap:Body>' +
                '</soap:Envelope>';

$req = [system.Net.HttpWebRequest]::Create("$WebServiceBase/$WebServiceMethod");
$req.Method = $method;
$req.ContentType = "text/xml; charset=utf-8";
$req.MaximumAutomaticRedirections = 4;
#$req.PreAuthenticate = $true;

$req.Credentials = [System.Net.CredentialCache]::DefaultCredentials;

$req.Headers.Add("SOAPAction", "http://nintex.com/DeleteWorkflow");
$encoding = new-object System.Text.UTF8Encoding
$byte1 = $encoding.GetBytes($soapEnvelope);

$req.ContentLength = $byte1.length;
$byte1.Length;
$newStream = $req.GetRequestStream();

$newStream.Write($byte1, 0, $byte1.Length);

$res = $null;
$res = $req.getresponse();
$stat = $res.statuscode;
$desc = $res.statusdescription;
        
$stat
$desc
$res
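Since the HTTP 200 response code alone does not tell you whether anything actually happened (see above), you may also want to read the SOAP response body at the end of the script. A small addition using standard .NET stream APIs:

# read and print the SOAP response body to verify the call actually succeeded
$reader = New-Object System.IO.StreamReader($res.GetResponseStream());
$responseXml = $reader.ReadToEnd();
$reader.Close();
$res.Close();
$responseXml;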

Update Theme on all Webs of Office365

So I have been doing a small migration project for a customer moving to Office 365. Finally I have a reason to bother working on Office 365. I have to say upfront: this is not a finished platform (yet!). Great potential, though.

As usual the initial idea is something completely different than what I spent most of my time on up to now.

Care to venture a guess?…of course: design. As usual theming is a b**** because you cannot deploy a solution on Office 365.

First I tried all the usual stuff:
– the UI, but there are about 30-40 sites (subsites)
– CSOM/web services (CSOM uses the API web services, if I am correct…)

Too bad the SPWeb.ApplyTheme method doesn't work as intended. Funny how Microsoft is: the method has four parameters, and if you give it three it will tell you "you gave me only 3, I need 4", even though TechNet tells you it's fine to pass $null values. But that's no biggie, because you could use dummy data, right? So if you do that and pass four values, you get "Too many resources requested" or a similar message translated from German (the customer wants it in 1031 – good that Office 365 has all the language packs).

So the result I created is a PowerShell script combining the chocolatey goodness of the SPO Management Shell (getting all the SPO site objects of your tenant) with the caramel filling of CSOM (you cannot get any web objects otherwise, so we use CSOM for that)… and to top it off, we use sweet old Internet Explorer as a COM application to fill out the form that applies the theme for each of the webs while iterating.

I would have liked to do it differently. In the traditional on-premises shell I could have done this with a one-line script. I could have guessed from the beginning that it would be a bit more difficult, but needing three different paradigms to get a simple thing like theming automation done is a bit hilarious. That doesn't compare to anything – at all!

I would have been fine with combining the Office 365 Management Shell with CSOM. Okay – I mean the Management Shell includes only about 30 cmdlets at this point. It's pretty much only good for creating site collections, emptying the recycle bin (which cannot be done via the UI as I write this post – meh) and adding users to SharePoint groups.

So we all know how bad this solution is – I am not even going to try to sell it to you as a good idea. It doesn't work reliably. But it does reduce your workload, so it is definitely sensible. I can already imagine my customer telling me: "That theme is not good enough. Let's use a different one." This is probably what will happen. It usually does.

It took me about the same time to complete the script as it would have taken to go through all of these webs and make all of the changes manually. So I have already won. Maybe I can help somebody else this way as well, so I am sharing the script here.

Keep in mind that you need to use the admin URL to connect to the SPO service. Check this article on how to set up your environment accordingly: Set up the SharePoint Online Management Shell Windows PowerShell environment.

You should be aware that if you want to do this on a SharePoint 2010 machine, you will have to open your PowerShell or your ISE with the suffix "-v2", like "%windir%\system32\WindowsPowerShell\v1.0\PowerShell_ISE.exe", because SharePoint 2010 will not work with .NET 4.0 when you install the management shell.

The script needs the following:
– you need a palette; you can create one by taking an existing palette and editing it with a tool such as the SharePoint Color Palette Tool
– you need the management shell for Office 365 installed
– you need an Office 365 tenant and the credentials to access it (duuuh!)

The script does the following:
– iterate over all sites that are not the search site, a My Site (those use oslo.master) or the public site
– get the root web from the site URL: this is where the palette (theme) is uploaded
– get all the webs and iterate over them
– for the next step you need an IE window opened and authenticated against your tenant
– apply the theme via the two forms (you may think that because the second form has its own URL you can skip the first one, but the first one actually creates a cached version of the theme, so you need to fill out the first form to be able to fill out the second one successfully)
– you may also ask: why does he need the sleep commands? Any good script would work without them, but this is not one of those. We actually have to wait for the requests to be answered. It may even take a couple of seconds more than I allow in my script; the anchors I use for clicking will not be found by the getElementBy* methods if there isn't enough time in between.
So here is the “masterpiece”. Drop me a comment if it does in fact help you. Apart from that it may be a stepping stone for a much nicer script in the future.

param (
    [string] $LocalPalettePath = "C:\Backup\my-palette.spcolor",
    [string] $username = "myusername@mytenant",
    [string] $password = "mypassword",
    [string] $url = "mytenant-admin.url"
)

function Process-File($Context, $File, $RemoteFolder) {
    Write-Output ("Uploading " + $File.FullName);
    $FileStream = New-Object IO.FileStream($File.FullName, [System.IO.FileMode]::Open)
    $FileCreationInfo = New-Object Microsoft.SharePoint.Client.FileCreationInformation
    $FileCreationInfo.Overwrite = $true
    $FileCreationInfo.ContentStream = $FileStream
    $FileCreationInfo.URL = $File.Name
    $Upload = $RemoteFolder.Files.Add($FileCreationInfo)
    $Context.Load($Upload)
    $Context.ExecuteQuery()
}

function Upload-Palette($ctx, $web, [string] $localPalettePath) {
    $ctx.Load($web);
    $ctx.ExecuteQuery();

    #Write-Host "Web URL is" $web.Url
    Write-Host($localPalettePath)
    if($web.ServerRelativeUrl -ne "/") {
        $remoteFolderRelativeUrl = $web.ServerRelativeUrl + "/_catalogs/theme/15/";
    } else {
        $remoteFolderRelativeUrl = "/_catalogs/theme/15/";
    }
    $remoteFolder = $web.GetFolderByServerRelativeUrl($remoteFolderRelativeUrl);
    $ctx.Load($remoteFolder)
    $ctx.ExecuteQuery();

    $file = get-item $localPalettePath;
    Process-File $ctx $file $remoteFolder;
}

function CreateTheme($ie, $ctx, $web, [string] $localPalettePath, [string] $rootWebUrl, [string] $rootWebRelativeUrl) {
    $ctx.Load($web);
    $ctx.ExecuteQuery();

    $baseUrl = $web.Url.Replace($web.ServerRelativeUrl, "");
    $relativeWebUrl = $web.ServerRelativeUrl;

    $localPaletteItem = get-item $localPalettePath
    $localPaletteName = $localPaletteItem.Name;

    $url = $web.Url;

    if($relativeWebUrl -eq "/") {
        $relativeBase = "";
    } else {
        $relativeBase = $relativeWebUrl;
    }
    if($rootWebRelativeUrl -eq "/") {
        $relativeBaseRoot = "";
    } else {
        $relativeBaseRoot = $rootWebRelativeUrl;
    }

    $gallery = "/_layouts/15/start.aspx#/_layouts/15/designbuilder.aspx?masterUrl={0}&themeUrl={1}&imageUrl={2}&fontSchemeUrl={3}";

    $masterUrl = $relativeBase + "/_catalogs/masterpage/seattle.master";
    $themeUrl = $relativeBaseRoot + "/_catalogs/theme/15/$localPaletteName";
    $fontUrl = $relativeBaseRoot + "/_catalogs/theme/15/SharePointPersonality.spfont";
    $imageUrl = "";

    $masterUrlEncoded = [System.Web.HttpUtility]::UrlEncode($masterUrl);
    $themeUrlEncoded = [System.Web.HttpUtility]::UrlEncode($themeUrl);
    $fontUrlEncoded = [System.Web.HttpUtility]::UrlEncode($fontUrl);
    $imageUrlEncoded = [System.Web.HttpUtility]::UrlEncode($imageUrl);

    $formUrl = $url + [string]::Format($gallery, $masterUrlEncoded, $themeUrlEncoded, $imageUrlEncoded, $fontUrlEncoded);
    $ie.Navigate($formUrl);
    $ie.Visible = $true;
    sleep 5;

    "First Form:"
    $formUrl

    $ieDoc = $ie.Document;

    $div = $ieDoc.getElementById("ms-designbuilder-main")

    $anchor = $div.GetElementsByTagName("a") | ? { $_.id -and $_.id.endswith("btnLivePreview") }

    $anchor.Click();

    sleep 2;

    "Second Form:"
    $ie.LocationURL

    $ieDoc = $ie.Document;

    sleep 15;
    $anchor = $ieDoc.GetElementById("btnOk");

    $anchor.Click();

    sleep 2;

    "PostLocation:"
    $ie.LocationURL;

    sleep 1;
}

# Control #

$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $(convertto-securestring $password -asplaintext -force)
$credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $(convertto-securestring $password -asplaintext -force));

Connect-SPOService -Url $url -Credential $cred

$sposites = get-sposite | ? { -not $_.Url.EndsWith("search") -and -not $_.Url.Contains("-public.") -and -not $_.Url.Contains("-my.") }

clear;

foreach($sposite in $sposites) {
    if($sposite) {
        $ctx = New-Object Microsoft.SharePoint.Client.ClientContext($sposite.Url);
        $ctx.Credentials = $credentials;

        $rootWeb = $ctx.Web
        $childWebs = $rootWeb.Webs
        $ctx.Load($rootWeb);
        $ctx.ExecuteQuery();

        Write-Host $rootWeb.Url;

        Upload-Palette $ctx $rootWeb $LocalPalettePath

        $app = new-object -com shell.application
        $ie = $app.windows() | ? { $_.Name -eq "Internet Explorer" } | select -first 1;

        CreateTheme $ie $ctx $rootWeb $LocalPalettePath $rootWeb.Url $rootWeb.ServerRelativeUrl;

        $ctx.Load($childWebs)
        $ctx.ExecuteQuery()

        foreach ($childWeb in $childWebs)
        {
            CreateTheme $ie $ctx $childWeb $LocalPalettePath $rootWeb.Url $rootWeb.ServerRelativeUrl;
        }
    }
}

Testing Incoming Email

After all of the infrastructure blog articles over the last couple of days, here is a short one for development.

When you have a dev machine and want to test incoming email, you can use the pickup folder in your IIS mailroot folder to simulate everything after what Exchange usually does for you (routing the email to your SharePoint server from outside, e.g. when you write an email with your Outlook client).

So you have the pickup folder, and you have the drop folder, which is where SharePoint picks up its emails via the job-email-delivery timer job (which runs every minute by default).

You will see that if you place a file into the pickup folder, it moves quickly (in a matter of seconds) to the drop folder and is transformed into an .eml file with a filename based on an ID.

This is the content of a text file you can use to simulate this. Take a look at the To: property: that is the email address of the list I want the email to eventually end up in.

From:blog@yourdomain.org
To:Mailbox@dev.meiringer.com 
Subject: MySubject
This is the body of the email

So when you are testing and purging an email queue, you will want a folder (I call it "dump") where you keep the test objects you want to use as incoming emails, and copy them from there to the pickup folder. Then you can either wait the one minute or, if you are not that patient, run the job after about 3 seconds of waiting.

if ((Get-PSSnapin "Microsoft.SharePoint.PowerShell" -ErrorAction silentlyContinue) -eq $null) { Add-PSSnapin "Microsoft.SharePoint.PowerShell" }

get-childitem -Path "C:\inetpub\mailroot\Dump" -Filter *.txt | % {
    $_.FullName
    copy-item -LiteralPath $_.FullName -Destination ("C:\inetpub\mailroot\Pickup\" + $_.Name) 
}

sleep 3

$job = Get-SPTimerJob job-email-delivery;
$job.RunNow();

That pretty much does it. You can now concentrate on your test content and let the script handle the rest. The great thing is, of course, that it is re-runnable, so you can generate as many emails in the target list as you please.

web.config

Okay, so I have my dev machine, and I get the typical error: please change the customErrors setting in the web.config to RemoteOnly or Off so you can see the actual error.

I tried a lot, but what finally did the trick was changing the setting in the 14 hive web.config:
C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS\web.config
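For reference, this is the relevant element inside <system.web> of that file; a minimal sketch (mode can be Off or RemoteOnly; RemoteOnly only shows error details when browsing locally on the server):

<system.web>
  <!-- show full error details instead of the generic error page -->
  <customErrors mode="Off" />
</system.web>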

– I checked to make sure the spelling was right (it is case-sensitive).
– I added it at different places in the config file (no difference; you will see there is only one correct place to put it).
– I changed it from Off to RemoteOnly and back to Off (no difference if you are accessing locally).
– I changed it manually via web.config and via IIS (the settings seem to be synced, but in special cases it seems they are not).

Creating SharePoint 2010 UPA Properties via PowerShell

So I want to say thanks to Vijai.

I took his blog post as a starting point. I had to tweak it a bit because it threw an error for me, but in the end it works a charm!

The code from this post did not work either (the user properties did not show up on the Central Administration management page).

Here are my comments for Vijai:

changed

$ps = $psm.GetProfileSubtype([Microsoft.Office.Server.UserProfiles.
ProfileSubtypeManager]::GetDefaultProfileName
[Microsoft.Office.Server.UserProfiles.ProfileType]::User))

into

$defaultSubType = [Microsoft.Office.Server.UserProfiles.ProfileSubtypeManager]::GetDefaultProfileName([Microsoft.Office.Server.UserProfiles.ProfileType]::User)
$ps = $psm.GetProfileSubtype($defaultSubType)

– the XML allows configuration of IsVisible; the code doesn't reflect that (it statically sets $true).
– for anyone who is looking for IsUserEditable, this is how it would look in your code:

$profileSubTypeProp.IsUserEditable = $true;

Properties.xml
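The Properties.xml linked above is not reproduced here, so as a hint: a minimal file matching the element names the script below reads (Name, DisplayName, Description, Type, Length, Privacy, PrivacyPolicy) could look like this sketch. The property itself is made up:

<?xml version="1.0" encoding="utf-8"?>
<UserProfileProperties>
  <Property>
    <Name>CostCenter</Name>
    <DisplayName>Cost Center</DisplayName>
    <Description>The user's cost center</Description>
    <Type>string</Type>
    <Length>50</Length>
    <Privacy>Public</Privacy>
    <PrivacyPolicy>OptIn</PrivacyPolicy>
  </Property>
</UserProfileProperties>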

CreateProperties.ps1

# Load SharePoint snap-in
if ((Get-PSSnapin "Microsoft.SharePoint.PowerShell" -ErrorAction SilentlyContinue) -eq $null)
{
    Add-PSSnapin Microsoft.SharePoint.PowerShell
}

# Create service context
$site = Get-SPSite http://win-dgkr68kv601/my
$serviceContext = Get-SPServiceContext $site

$upcm = New-Object Microsoft.Office.Server.UserProfiles.UserProfileConfigManager($serviceContext);
$ppm = $upcm.ProfilePropertyManager
$cpm = $ppm.GetCoreProperties()
$ptpm = $ppm.GetProfileTypeProperties([Microsoft.Office.Server.UserProfiles.ProfileType]::User)
$psm = [Microsoft.Office.Server.UserProfiles.ProfileSubTypeManager]::Get($serviceContext)
$defaultSubType = [Microsoft.Office.Server.UserProfiles.ProfileSubtypeManager]::GetDefaultProfileName([Microsoft.Office.Server.UserProfiles.ProfileType]::User)
$ps = $psm.GetProfileSubtype($defaultSubType)

[xml]$xmlData = Get-Content "E:\Scripting\Properties.xml"

$pspm = $ps.Properties
$xmlData.UserProfileProperties.Property | ForEach-Object {
    $property = $cpm.GetPropertyByName($_.Name)
    if($property -eq $null)
    {
        $Privacy = $_.Privacy
        $PrivacyPolicy = $_.PrivacyPolicy

        #$property = $properties.Create($false)
        $coreProp = $cpm.Create($false)

        $coreProp.Name = $_.Name
        $coreProp.DisplayName = $_.DisplayName
        $coreProp.Description = $_.Description
        $coreProp.Type = $_.Type
        $coreProp.Length = $_.Length

        $cpm.Add($coreProp)

        $profileTypeProp = $ptpm.Create($coreProp);

        $profileTypeProp.IsVisibleOnEditor = $true;
        $profileTypeProp.IsVisibleOnViewer = $true;
        $ptpm.Add($profileTypeProp)

        $profileSubTypeProp = $pspm.Create($profileTypeProp);

        # DefaultPrivacy values:
        # Public       - visible to everyone
        # Contacts     - limited to my colleagues
        # Organization - limited to my workgroup
        # Manager      - limited to my manager and me
        # Private      - limited to me only
        # NotSet       - privacy level is not set
        $profileSubTypeProp.DefaultPrivacy = [Microsoft.Office.Server.UserProfiles.Privacy]::$Privacy

        # PrivacyPolicy values:
        # Mandatory - the user must fill in a value
        # OptIn     - opt in to provide a privacy policy value for the property
        # OptOut    - opt out from providing a privacy policy value for the property
        # Disabled  - turns off the feature and hides all related user interface
        $profileSubTypeProp.PrivacyPolicy = [Microsoft.Office.Server.UserProfiles.PrivacyPolicy]::$PrivacyPolicy

        $profileSubTypeProp.IsUserEditable = $true;

        $pspm.Add($profileSubTypeProp)

        write-host -f yellow $_.Name property is created successfully
    }
    else
    {
        write-host -f red $_.Name property already exists
    }
}

DeleteProperties.ps1

# Load SharePoint snap-in
if ((Get-PSSnapin "Microsoft.SharePoint.PowerShell" -ErrorAction SilentlyContinue) -eq $null)
{
    Add-PSSnapin Microsoft.SharePoint.PowerShell
}

# Create service context
$site = Get-SPSite http://win-dgkr68kv601/my
$serviceContext = Get-SPServiceContext $site

# Get the profile manager, user profiles and user properties
$profileManager = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($serviceContext)
$properties = $profileManager.get_Properties()

[xml]$xmlData = Get-Content "E:\Scripting\Properties.xml"

$upcm = New-Object Microsoft.Office.Server.UserProfiles.UserProfileConfigManager($serviceContext)
$pdtc = $upcm.GetPropertyDataTypes()
$ppm = $upcm.ProfilePropertyManager
$cpm = $ppm.GetCoreProperties()

$xmlData.UserProfileProperties.Property | ForEach-Object {
    #$property = $properties.GetPropertyByName($_.Name)
    $property = $cpm.GetPropertyByName($_.Name)
    if($property -ne $null)
    {
        $cpm.RemovePropertyByName($_.Name)
        write-host -f yellow $_.Name property is deleted successfully
    }
    else {
        write-host -f red $_.Name property does not exist
    }
}

SharePoint 2010 Environments Setup

Inspired by a post from Chris Johnson a year back (SharePoint development environments, my guidance), I will share my experience regarding the solution deployment life cycle.

As a consultant I need mobility, so

  • I prefer working on a laptop with 8 GB RAM and a dual-core CPU with at least 2.20 GHz
  • If I have a customer image, it is hopefully Windows 7 x64 or comparable
  • I preferably virtualize my dev environment (preference: VMware Player)
  • My preferred environment is a Windows Server 2008 R2 image with SharePoint 2010, Office 2010, SharePoint Designer, Adobe Reader, Adobe IFilter and Visual Studio 2010

For any work related to the User Profile Service or other domain-centric development, I need to have a domain controller on that machine first. I included that on my newer images.

The customer typically has multiple environments.

For compliance and separation-of-concerns reasons I prefer the test, pre-live and production environment setup. Test and pre-live should be virtualized from a cost perspective; production should be its own farm on dedicated machines, if possible with no shared SQL (actually: SN – shared nothing). The biggest reasons are scalability when storing large files with RBS, Reporting Services (which should not run on a shared environment) and licensing considerations.

If the customer is a global player, a multi-farm setup should be used (see the TechNet article on multi-farm architecture). If you have the money and the need, 3-5 farms with distinct usages can make sense (services, collaboration (lightweight sites), special-use/high-load applications, User Profile Service, custom services), in some scenarios even more.

Depending on the capabilities and resources of the infrastructure department within IT, it can make sense to map the purposes of the environments as follows:

Environment | Purpose | Notes
Virtual Dev Environment | Develop code, unit testing | virtual
(Virtual Integration Environment) | (Integration testing) | not absolutely necessary; virtual
Testing | System test/integration testing | virtual; same patch path, solutions, web applications and site collections as production
Pre-Live | User acceptance testing | virtual; same patch path, solutions, web applications and site collections as production; once signed off by IT project management, a solution that is in testing may be deployed here
Production | Live application | only affected by system and application updates and data migration; preferably native; once signed off by business project management, a solution that is in pre-live may be deployed here