Copy List Fields, Views and Items From List to List

Today I had to recreate a SharePoint 2013 List because the old one had an error (Content Approval errored out with “Sorry something went wrong” – Null-Pointer Exception).

My first instinct was to create a new list manually, and so I did. Of course with a dummy name, so I had to recreate it again. I didn’t want to get stuck doing it a third time, so I created the little script below.

The script copies the list fields and adds them to the new list, then does the same with all the views, and finally copies all the items (which was the initial idea) to the new list.

The input is fairly simple. You need to specify a URL to identify the web you want to perform this operation on. You could amend the script to also allow providing a target URL, so that you can copy the fields, views and items across site and site collection boundaries. However, you might then get issues with site fields used in your list that do not exist on the target site collection (Publishing Infrastructure, custom fields); you would need to do a bit more than just add a parameter and initialize another web object. Also, this works well for lists, but not for document libraries. Another limitation is content types; I did not include those either.

So you see this is more of a starting point than anything else. But it does the job and it was pretty quick to write, so I thought I would share it with you.

param (
    [Parameter(Mandatory=$True)]
    [string] $Url,
    [Parameter(Mandatory=$True)]
    [string] $SourceList,
    [Parameter(Mandatory=$True)]
    [string] $TargetList
)

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue;

$spWeb = Get-SPWeb $Url;

$spListCollection = $spWeb.Lists;

$spSourceList = $spListCollection.TryGetList($SourceList);
$spTargetList = $spListCollection.TryGetList($TargetList);

if ($spSourceList) {
    if ($spTargetList) {
        $spTargetList.EnableModeration = $true;

        $spSourceFields = $spSourceList.Fields;
        $spTargetFields = $spTargetList.Fields;

        # collect all source fields that do not yet exist on the target list
        $spFields = New-Object System.Collections.ArrayList;
        foreach ($field in $spSourceFields) {
            if (-not ($spTargetFields.Contains($field.ID))) {
                $spFields.Add($field) | Out-Null;
            }
        }

        foreach ($field in $spFields) {
            if ($field) {
                Write-Host -ForegroundColor Yellow ("Adding field " + $field.Title + " (" + $field.InternalName + ")");
                $spTargetFields.Add($field) | Out-Null;
            }
        }

        # copy all views that do not yet exist on the target list (matched by title)
        $spSourceViews = $spSourceList.Views;
        $spTargetViews = $spTargetList.Views;
        foreach ($view in $spSourceViews) {
            $contains = $spTargetViews | ? { $_.Title -eq $view.Title };
            if (-not ($contains)) {
                $spTargetViews.Add($view.Title, $view.ViewFields.ToStringCollection(), $view.Query, $view.RowLimit, $view.Paged, $view.DefaultView) | Out-Null;
            }
        }

        $spTargetList.Update();

        # copy the items, skipping hidden and read-only fields as well as the ID field
        $spSourceItems = $spSourceList.Items;

        foreach ($item in $spSourceItems) {
            if ($item) {
                $newItem = $spTargetList.Items.Add();
                foreach ($spField in $spSourceFields) {
                    try {
                        if ($spField -and $spField.Hidden -ne $true -and $spField.ReadOnlyField -ne $true -and $spField.InternalName -ne "ID") {
                            $newItem[$spField.InternalName] = $item[$spField.InternalName];
                        }
                    } catch [Exception] { Write-Host -f Red ("Could not copy content " + $item[$spField.InternalName] + " from field " + $spField.InternalName) }
                }
                $newItem.Update();
                #Write-Host -f Green "Item copied";
            }
        }
    } else {
        Write-Host -f Red "List $TargetList does not exist";
    }
} else {
    Write-Host -f Red "List $SourceList does not exist";
}
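Assuming you saved the script as Copy-List.ps1 (the file name, URL and list titles here are placeholders), a call looks like this:

.\Copy-List.ps1 -Url "http://intranet.mydomain.tld" -SourceList "Broken List" -TargetList "New List"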


Ensuring an LDAP Claim and what that means for your SPUser Object

So I have a customer using LDAP as an authentication Provider on SharePoint 2010.

I wrote a script a couple of weeks ago that migrates the permissions of a user from one account to another on either Farm, WebApplication, Site or Web level (taking into consideration site collection admin permissions, group memberships and any ISecurableObject [Web, List, Item, Folder, Document] role assignments, excluding ‘Limited Access’).

Move-SPUser only does the trick in situations where you have an existing user object, create a new user object and then migrate. If the user is actively using both accounts simultaneously, Move-SPUser is not your friend.

This is the reason:

Detailed Description

The Move-SPUser cmdlet migrates user access from one domain user account to another. If an entry for the new login name already exists, the entry is marked for deletion to make way for the Migration.

source: http://technet.microsoft.com/en-us/library/ff607729(v=office.15).aspx


So now I have my script, but the difference between ensuring an LDAP account and an AD claim is that with the LDAP account you need to explicitly provide the ClaimString. With the AD account that is not the case.

LDAP ClaimString:

i:0#.f|ldapmember|firstname.lastname@mydomain.tld

AD ClaimString:

i:0#.w|domain\SAMAccountName

With both, the best approach is the following way of ensuring the user:

$claim = New-SPClaimsPrincipal -identity $line.Name -IdentityType "WindowsSamAccountName";

$user = $spweb.EnsureUser($claim.ToEncodedString());

Additionally, with the LDAP claim the email property is not set. Interestingly enough, the email is the claim identifier, so the Name property of the SPUser object is in this case the email. So you will want to add the following two lines:

$user.Email = $user.Name;

$user.Update();

Now you have really ensured that the user object is on the site collection in the same way!
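Putting the LDAP case together, this is roughly what ensuring such a user looks like (a minimal sketch; the web URL and the claim string are placeholders built from the example above):

$spWeb = Get-SPWeb "http://intranet.mydomain.tld";
$ldapClaim = "i:0#.f|ldapmember|firstname.lastname@mydomain.tld";
$user = $spWeb.EnsureUser($ldapClaim);
$user.Email = $user.Name;   # for the LDAP claim, the claim identifier is the email address
$user.Update();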


Static IP? No thanks, I’ve got ftp!

So yes, there is a bit of a logical issue in the title: if I have FTP, there already is a static IP somewhere, namely the one the server name points to. But maybe I don’t want that static IP for this purpose, and it costs me 15 EUR/month to get one via my internet provider. I could use a service that tunnels my requests via a static IP to my dynamic one, but then I would have to register with somebody.

So I thought, why can I not do the following: trigger a timer job on my home machine, get the IP address and store it in a file. This file I could either push via a service like Dropbox (but I don’t want Dropbox on my server) or upload via FTP.

I took the code from this site.

Here it is:


function UploadFTP {
    param(
        [string] $user,
        [string] $url,
        [string] $port,
        [string] $pass,
        [string] $localPath,
        [string] $remotePath
    )

    # create the FtpWebRequest and configure it
    $ftp = [System.Net.FtpWebRequest]::Create("ftp://" + $url + ":" + $port + "/" + $remotePath);
    $ftp.Method = [System.Net.WebRequestMethods+Ftp]::UploadFile
    $ftp.Credentials = New-Object System.Net.NetworkCredential($user, $pass);
    $ftp.UseBinary = $true
    $ftp.UsePassive = $true
    # read in the file to upload as a byte array
    $content = [System.IO.File]::ReadAllBytes($localPath);
    $ftp.ContentLength = $content.Length
    # get the request stream, and write the bytes into it
    $rs = $ftp.GetRequestStream()
    $rs.Write($content, 0, $content.Length)
    # be sure to clean up after ourselves
    $rs.Close()
    $rs.Dispose()
}

function DownloadFTP {
    param(
        [string] $user,
        [string] $url,
        [string] $port,
        [string] $pass,
        [string] $downloadPath,
        [string] $remotePath
    )

    # create the FtpWebRequest and configure it
    $FTPRequest = [System.Net.FtpWebRequest]::Create("ftp://" + $url + ":" + $port + "/" + $remotePath);
    $FTPRequest.Credentials = New-Object System.Net.NetworkCredential($user, $pass)
    $FTPRequest.Method = [System.Net.WebRequestMethods+Ftp]::DownloadFile
    $FTPRequest.UseBinary = $true
    $FTPRequest.KeepAlive = $false

    # send the ftp request
    $FTPResponse = $FTPRequest.GetResponse()
    # get a download stream from the server response
    $ResponseStream = $FTPResponse.GetResponseStream()
    # create the target file on the local system and the download buffer
    $LocalFile = New-Object IO.FileStream ($downloadPath, [IO.FileMode]::Create)
    [byte[]]$ReadBuffer = New-Object byte[] 1024
    # loop through the download
    do {
        $ReadLength = $ResponseStream.Read($ReadBuffer, 0, 1024)
        $LocalFile.Write($ReadBuffer, 0, $ReadLength)
    }
    while ($ReadLength -ne 0)

    # be sure to clean up after ourselves
    $LocalFile.Close();
    $LocalFile.Dispose();
    $ResponseStream.Close();
    $FTPResponse.Close();
}

$user = "someusername"
$url = "some.ftp.server"
$port = "21";
$pass = "somepassword";
$localPath = "C:\tmp\myfile.txt";
$downloadPath = "C:\tmp\myfiledown.txt";
$remotePath = "myuploadedfile.txt";

$ip = Get-NetIPAddress | ? { $_.AddressFamily -eq "IPv4" -and $_.InterfaceAlias -eq "Ethernet"}
$ip.IPv4Address > $localPath;

UploadFTP $user $url $port $pass $localPath $remotePath
DownloadFTP $user $url $port $pass $downloadPath $remotePath

So what I am doing is defining my variables, writing my IP address to the local path, and uploading that file as well as downloading it again. My PoC was with one machine, so the expectation is that the downloaded file and the original file are the same. Which is true.
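If you want the script to do that comparison for you, a small addition (my own; Get-FileHash requires PowerShell 4.0 or later) does the trick:

# compare the uploaded source and the downloaded copy
if ((Get-FileHash $localPath).Hash -eq (Get-FileHash $downloadPath).Hash) {
    Write-Host "Round trip successful - files are identical";
}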

The eventual setup will look a bit different, because I will have to get at the public IP as well as set up the job which will then upload the file. On the other side I will need the part of the script that downloads the file.

So my use case is: I want to connect to a server that is connected to the internet, but I don’t know its IP, because it is dynamic/DHCP.
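To get at the public IP rather than the local one, one option is to ask an external service (a sketch; api.ipify.org is just an example of such a service and my assumption, not part of the original setup):

# returns the public IPv4 address as a plain string
$publicIp = Invoke-RestMethod -Uri "https://api.ipify.org";
$publicIp > $localPath;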

IIS WAMREG admin Service – Windows Event 10016 – Wictor Wilén Addendum

The reason for sharing this post today is that I had the issue described in Wictor Wilén’s post, and the solution posted there did not work for me at first. So I wanted to elaborate a little bit.

I was pretty sure that Wictor knows his stuff, because he is an MVP and a well-known one at that, so I thought I was doing something wrong. How true that turned out to be.

The first wrong turn: when I tried fixing this the first time, I didn’t understand the take-ownership solution he proposed. Once I had figured that out and it still didn’t work, I tried finding other sources, but couldn’t. Enter “The benefits of Social/Web 2.0”.

When checking the comments on Wictor’s blog I saw that a lot of others faced the same issue I did, and then I saw Blair’s post with the solution… I should have known this. An x86/x64 issue yet again.

Find the full description of how I solved this below and again a special shout-out to Wictor for the solution:

This is the error you will see in Windows Event Log:


[Image: Windows Event Log]

You can find the details of this error here:

To resolve this issue, you need to give permissions to the accounts executing SharePoint; you can either look them up in the Services snap-in or add the permissions via the local SharePoint/IIS groups.

As the security tab of the IIS WAMREG admin service is greyed out, you need to give full permissions to the local administrators group. To do this you need to take ownership of the following two keys:

HKEY_CLASSES_ROOT\AppID\{61738644-F196-11D0-9953-00C04FD919C1}


[Image: Registry Entry 1]

and

HKCR\Wow6432Node\AppID\{61738644-F196-11D0-9953-00C04FD919C1}


[Image: Registry Entry 2]

After that you will be able to edit the permissions on the permissions tab in Component Services.


[Image: Component Services]

AppManagement and SubscriptionSettings Services, Multiple Web Applications and SSL

So currently I am setting up four environments, of which one is production, two are staging and another was a playground installation.

My staging environments (TEST, QA, PROD) are multi-server, multi-farm systems (multi-farm because the 2013 farm publishes Search and UPA to existing 2010 farms). They are running SPS2013 Standard with March PU 2013 + June CU 2013. They will be using app pool isolation, and the App Management and SubscriptionSettings services have their own account (svc_sp{t, q, p}_app, i.e. svc_spt_app, svc_spq_app and svc_spp_app).

I have three web applications, all of which are secured by SSL, making a wildcard certificate necessary for the app domain. Each has its own account (svc_sp{t, q, p}_{col, tws, upa}). The reason for this is that I will be using Kerberos authentication, and for the SPNs I need dedicated accounts for each application URL.

My playground was once a 4-server farm, but now 3 servers have been removed. It runs neither the March PU 2013 nor the June CU 2013. There, app pool isolation without SSL is used.

On the playground the app management worked well. I actually encountered my problem on test first and tried to replicate it on the playground, but couldn’t. But I am getting ahead of myself. The system was set up via AutoSPInstaller, and the necessary certificates and IPs involved were requested and implemented. The AD team did the domain setup for me. I didn’t set up my environment following this article, but it is a good one to read. From it I also got the idea of creating a separate dummy web application for attaching my IIS bindings and certificate, which makes a lot of sense because of security considerations and Kerberos.

The first article to read to get an overview of what is necessary and what’s trying to be achieved can be found here.

So I set up everything and still it wasn’t working. What does that mean? I will explain. When I subscribe to an app, download it from the store and add it to a site collection of my choosing, I get to click on it once it has finished installing. The link then leads me to my app domain. With SSL, I could only actually get anywhere when I was using the same application pools; otherwise I saw the below.

This is what I wanted to see:
[Image: Expected]

This is what I saw on any of the web applications that used SSL and had a different app pool account than the one I was using for my dummy web application:
[Image: Actual]

So this blank page is actually exactly what you see when you leave the request management service running on the frontends without doing any topology configuration.

So I tried to work with the user policy from the web application management page, in hopes of giving the users permissions on the content databases. As I found out later, that permission grant was not actually happening, yet it was exactly what was needed: I had to manually add the app pool account of the app domain to the SPDataAccess group of the content databases. Then it also works with SSL. I actually set up three web applications WITHOUT SSL on the test staging environment with the same users as the SSL web applications, and this worked like a charm; but for any SSL web application I needed to explicitly give permissions on the content database. This is a nightmare to maintain. For my migration of 20 databases from 2010 to 2013 I need to do this again and again, and likewise for each new content database I create in the future. Just imagine you create a new content database and forget to do this: for any site collection in that content database the above issue will show up. Hard to debug later on.
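Here is a sketch of how that grant could be scripted instead of being clicked through each time (the web application URL and account name are placeholders, and Invoke-Sqlcmd from the SQL Server PowerShell module is an assumption; I did the fix manually):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue;

$webApp = Get-SPWebApplication "https://teamsites.mydomain.tld";  # placeholder URL
$account = "MYDOMAIN\svc_spt_app";                                # placeholder app pool account

foreach ($db in $webApp.ContentDatabases) {
    $sql = @"
IF NOT EXISTS (SELECT * FROM sys.database_principals WHERE name = '$account')
    CREATE USER [$account] FOR LOGIN [$account];
EXEC sp_addrolemember 'SPDataAccess', '$account';
"@;
    Invoke-Sqlcmd -ServerInstance $db.Server -Database $db.Name -Query $sql;
    Write-Host ("Ensured SPDataAccess for " + $account + " on " + $db.Name);
}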

Not sure what Microsoft is thinking here, but I am happy that it only took me 4 days to figure this one out.

Testing Incoming Email

After all the infrastructure articles of the last couple of days, here is now a short one for development.

When you have a dev machine and you want to test incoming email, you can use the pickup folder in your IIS mailroot folder to simulate everything that Exchange usually does for you (routing the email to your SharePoint server from outside, e.g. when you write an email with your Outlook client).

So you have the pickup folder, and you have the drop folder, which is the one SharePoint picks its emails up from via the job-email-delivery job (which runs every minute by default).

You will see that if you place a file into the pickup folder, it will move quickly (in a matter of seconds) to the drop folder and be transformed into an .eml file with a filename based on an ID.

This is the content of a text file you can use to simulate this. Take a look at the To: property; that is the email address of the list I want the email to end up in.

From:blog@yourdomain.org
To:Mailbox@dev.meiringer.com 
Subject: MySubject
This is the body of the email

So when you are testing and you have an email queue you purge, you will want a folder (I call it Dump) where you keep the test objects you want to use as incoming emails, and from which you copy them to the pickup folder. From there you can either wait the one minute or, if you are not as patient, run the job after 3 seconds of waiting.

if ((Get-PSSnapin "Microsoft.SharePoint.PowerShell" -ErrorAction silentlyContinue) -eq $null) { Add-PSSnapin "Microsoft.SharePoint.PowerShell" }

get-childitem -Path "C:\inetpub\mailroot\Dump" -Filter *.txt | % {
    $_.FullName
    copy-item -LiteralPath $_.FullName -Destination ("C:\inetpub\mailroot\Pickup\" + $_.Name) 
}

sleep 3

$job = Get-SPTimerJob job-email-delivery;
$job.RunNow();

That pretty much does it. You now have the possibility to concentrate on your test content and let the script handle the rest. The great thing here is of course that it’s re-runnable and thus you can generate as many emails in the target list as you please.

Automation of Web Application Creation in Managed Environments (Part VII: Edit registry and host file)

This is the seventh and last article of this series. It is related to another article I already posted on the subject and tries to automate what was achieved there, i.e. edit the BackConnectionHostNames registry entry to contain all hosted hostnames, and update the local hosts file. Why would you do that? Because if you don’t, calls from the server to resources hosted on that same server will run into the loopback issue.

In essence, the loopback check is a (valid!) security measure: a server should not call resources on itself, as this can be a security loophole. Web servers, and especially web servers hosting web services, are a special case, so you need to configure for that case, and you do so by registering the allowed host names in the hosts file and the registry.

So let’s get into the BackConnectionHostNames Registry Edit.

$path = "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\BackConnectionHostNames"
$root_path = "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0"
$name = "BackConnectionHostNames"

$value = @"
some.hostname.extension
someother.hostname.extension
athird.hostname.extension
"@

# Test-Path cannot see registry values (only keys), so check via Get-ItemProperty instead
if (Get-ItemProperty -Path $root_path -Name $name -ErrorAction SilentlyContinue) {
    Write ($path + " exists!")
} else {
    Write ($path + " needs to be created!")
    # split the here-string into one entry per line, as a MultiString expects a string array
    New-ItemProperty -Path $root_path -Name $name -PropertyType MultiString -Value ($value -split "\r?\n")
}

So this one is pretty easy. You could improve it by reading the multi-string content from a text file or something similarly simple; see the sketch after this paragraph. I have 3 variables at the top, and these were the early days; you can see that because I omitted the semicolons at the end of the lines, which I usually include these days (except for function calls). Also, the two last variables could be used to build the first, so it is redundant, but that does not change the semantics of the script. The variable ‘value’ basically contains lines, where each line represents a hostname.
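For the text file variant (a minimal sketch; the file name is a placeholder), one line is enough, since Get-Content returns one array element per line, which is exactly what a multi-string value wants:

$value = Get-Content -Path ".\backconnection_hosts.txt"   # one hostname per line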

Next the script checks whether the item property already exists, and makes sure not to overwrite an existing value. You could easily change this by adding the following line

Set-ItemProperty -Path $root_path -Name $name -Value $value -confirm:$false

after

Write ($path + " exists!")

The only time you will get a problem is when a property with this name already exists but does not have the multi-valued string type.

If the property does not exist, it gets created with the given path, name, type and value. It’s almost as if you were creating a directory with PowerShell, which makes sense, because you can walk through the registry the same way you walk through a directory of files and folders.

Now that we have set up the registry, we can hit “Windows + R” for “Run”, type ‘regedit’ and navigate to the right place (HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0) to check the BackConnectionHostNames value. It should be fine with the values given above.
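Alternatively, you can skip regedit and check the value directly from PowerShell:

Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" -Name "BackConnectionHostNames" | Select-Object -ExpandProperty BackConnectionHostNames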

Okay, the next step is getting the hosts file in order. You could do this manually; honestly, it is basically nothing but an IP address and a hostname separated by a tab or two at the most. Find the script below.

$inpath = ([string] (Split-Path -parent $MyInvocation.MyCommand.Definition) + "\hosts.txt")
$definitions = ([string] (Split-Path -parent $MyInvocation.MyCommand.Definition) + "\hosts_definitions.txt")

$outpath = ([string] (Split-Path -parent $MyInvocation.MyCommand.Definition) + "\hosts")

$content = get-content $inpath;

$content > $outpath 

import-csv $definitions -delimiter ';' | % {
    Write ("    " + $_.IP + "    " + $_.HostName) >> $outpath;
    Write ("    " + $_.IP + "    " + $_.HostName);
}
    
if (Test-Path $outpath) {
    copy-item $outpath "C:\windows\system32\drivers\etc\hosts" -confirm:$false -force
    Write ("Copied host-file");
    remove-item $outpath
}

The hosts.txt and the hosts_definitions.txt need to be in the directory the script is run from ($MyInvocation.MyCommand.Definition). The hosts file (without the .txt) is created in the same directory and later copied to the correct location. For each line of the input file the script appends the IP and the hostname, with 4 spaces in between. Bam! Finito! Your hosts file is finished and ready to be sent to the correct folder, which in this case is the drivers\etc folder of the system directory. You copy it there and delete the source. There you go. That’s it.

This is what the input file looks like:

IP;HostName
192.168.1.3;some.hostname.extension
192.168.1.4;someother.hostname.extension
192.168.1.5;athird.hostname.extension

And just for completeness, here is what the hosts file looks like when delivered with a fresh Windows installation:

# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#       38.25.63.10     x.acme.com              # x client host

# localhost name resolution is handled within DNS itself.
#	127.0.0.1       localhost
#	::1             localhost

So now you have your registry and hosts file set up so the loopback check is handled the right way! The only thing left is to make this re-runnable in some fashion, or at least to be able to add new host names in the future, as the scripts in all the other parts allow. So using the Set-ItemProperty command might be a really good idea.

Additionally I want to touch on the topic of disallowing SSL 2.0 for IIS on Windows Server and automating that as well. This topic is described in another very detailed knowledge base article, so I will only briefly give you an overview of why you would want to do this. It fits well with editing the registry for the BackConnectionHostNames entry, because it uses the same mechanics, but isn’t quite as long/complex.

Basically this belongs in the context of security and server hardening. You will want to disallow certain ciphers that IIS allows by default but you do not. At one of my customers, the security department made this a requirement for our servers, and doing it manually on every server of the farm (because any SharePoint prerequisite install configures the Web Server role) is quite tedious.

The problem with these ciphers is that their names contain slashes (‘/’), e.g. RC2 128/128. The slash is used as a path separator in the context of PowerShell, so all the blogs tell you: you need to use the C# API. In the first version of my script I actually just added the first part (“RC2 128”) and manually edited the entries later (adding the “/128”). Yuck! So today I finally made this work in a sensible fashion. Finally I as the administrator no longer need to care which ciphers are actually disabled and can just run the script and be done with it. As it’s as fast as scripting should be, it’s a couple of script executions and less than a minute for a farm. Nice!

So how did I go about this? I found the RegistryKey class on MSDN, but I didn’t know how to get an object of type [Microsoft.Win32.RegistryKey], because when do you ever have a handle (basically what I was trying to get in the first place, so finding a handle is the same problem as before) or an IntPtr? I had to scan through multiple search results before finding something about the RegistryKey object and OpenSubKey. That’s where I found the entry point, which is basically a special form of the path (prefixed with “Registry::”), and that was the last missing piece to script the below:

function Get-LocalMachineRegistryKeyForEdit([string] $registryPath, [string] $SubKey) {
    # the registry provider needs the full hive name here (HKEY_LOCAL_MACHINE, not HKLM)
    $fullParentPath = "Registry::HKEY_LOCAL_MACHINE\" + $registryPath;
    if ( Test-Path $fullParentPath ) {
        $reg = [Microsoft.Win32.Registry]::LocalMachine.OpenSubKey($registryPath);
        $reg.OpenSubKey($SubKey, $true);
    }
}

function New-SubKey([Microsoft.Win32.RegistryKey] $key, [string] $subkey) {
    $fullParentPath = "Registry::" + $key.Name;
    if ( Test-Path $fullParentPath ) {
        $fullChildPath = $fullParentPath + "\" + $subkey;
        if ( -not (Test-Path $fullChildPath) ) {
            $newsubkey = $key.CreateSubKey($subkey);
        }  
    }
}

function New-PathDWORD([Microsoft.Win32.RegistryKey] $key, [string] $subkey, [string] $valueType, [string] $valueName, [string] $value) {
    $fullParentPath = "Registry::" + $key.Name;
    if ( Test-Path $fullParentPath ) {
        $fullChildPath = $fullParentPath + "\" + $subkey;
        if ( -not (Test-Path $fullChildPath) ) {
            $newsubkey = $key.CreateSubKey($subkey);
        }
        $k = Get-Item -Path ($fullChildPath)  

        if($k -ne $null) {
            $kProp = Get-ItemProperty -Path $fullChildPath -Name $valueName -erroraction silentlycontinue
            if($kProp -eq $null) {
                New-ItemProperty -Path $fullChildPath -Name $valueName -PropertyType $valueType -Value $value
            }
        }   
    }
}

# Registry::HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL
$schannelPath = "SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL";

# Registry::HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers
$ciphers = "Ciphers";

$cip = Get-LocalMachineRegistryKeyForEdit $schannelPath $ciphers

#enabled == 0
Write "SCHANNEL SECTION - Enabled=0";

New-PathDWORD $cip "NULL" "DWORD" "Enabled" "0"

New-PathDWORD $cip "RC2 40/128" "DWORD" "Enabled" "0"
New-PathDWORD $cip "RC2 56/128" "DWORD" "Enabled" "0"
New-PathDWORD $cip "RC4 40/128" "DWORD" "Enabled" "0"
New-PathDWORD $cip "RC4 56/128" "DWORD" "Enabled" "0"
New-PathDWORD $cip "RC4 64/128" "DWORD" "Enabled" "0"
New-PathDWORD $cip "RC2 128/128" "DWORD" "Enabled" "0"
New-PathDWORD $cip "RC4 128/128" "DWORD" "Enabled" "0"

#enabled == 1
Write "SCHANNEL SECTION - Enabled=1";

New-PathDWORD $cip "DES 56/56" "DWORD" "Enabled" "0xffffffff";
New-PathDWORD $cip "Triple DES 168/168" "DWORD" "Enabled" "0xffffffff"; # Triple DES 168/168

# ------
# Hashes

$hashes = "Hashes";
$hash = Get-LocalMachineRegistryKeyForEdit $schannelPath $hashes

Write "HASHES SECTION - Enabled=0";

#enabled == 0
New-PathDWORD $hash "MD5" "DWORD" "Enabled" "0"; # MD5

Write "HASHES SECTION - Enabled=1";

#enabled == 1
New-PathDWORD $hash "SHA" "DWORD" "Enabled" "0xffffffff"; # SHA


Write "ALGO SECTION";

$keyexchalgo = "KeyExchangeAlgorithms";
$algo = Get-LocalMachineRegistryKeyForEdit $schannelPath $keyexchalgo

New-PathDWORD $algo "PKCS" "DWORD" "Enabled" "0xffffffff"; # PKCS


Write "PROTOCOLS SECTION";

$protocolsPath = "SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols";
$protocols = "Protocols";

$prot = Get-LocalMachineRegistryKeyForEdit $schannelPath $protocols

$pct10 = "PCT 1.0";
New-SubKey $prot $pct10;  #PCT 1.0

$ssl20 = "SSL 2.0";
New-SubKey $prot $ssl20;  #SSL 2.0

$pct = Get-LocalMachineRegistryKeyForEdit $protocolsPath $pct10
New-PathDWORD $pct "Server" "DWORD" "Enabled" "0"; 

$ssl = Get-LocalMachineRegistryKeyForEdit $protocolsPath $ssl20
New-PathDWORD $ssl "Server" "DWORD" "Enabled" "0";

When all is run, this is what the registry looks like:

We basically have three functions covering the different use-cases I have.
First the “give me a registry key for editing” use-case: get that .NET object if it exists. Watch out: I am doing this in the HKLM (HKEY_LOCAL_MACHINE) context. If you want to do something outside of HKLM, you need to change it accordingly.

So then I have my object that I want to create DWORDs and keys for. I need a function that does exactly that, because the first 15 or so steps are: get the key for editing, check whether a subkey exists, and add a DWORD enabling or disabling it. Then there is a special case starting with the protocols section: there are sub-subkeys (or grandchildren keys, if you will). So we also need to be able to just create a path and then perform our Path+DWORD operation again.

So the Path+DWORD function is the heart of our script. Let’s look at it then. We make sure we don’t try to create something that already exists, so we have a parallel identification variable for the Test-Path method and the RegistryKey.OpenSubKey method. The subsequent OpenSubKey then opens the key for editing. I could of course have implemented it so that the first call of OpenSubKey directly opens the key I want, with the edit flag set there. That might have been more slick, but how would I find my scripts on other blogs later if I didn’t have any silliness in them, right? 😉
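For reference, the more direct variant (opening the target key writable in a single call) would look roughly like this:

$ciphersKey = [Microsoft.Win32.Registry]::LocalMachine.OpenSubKey("SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers", $true);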

So that’s a wrap.

I am sure you can optimize these things or find easier solutions to the challenges I described here, but I think it was well worth writing this down, as it took quite some time to find the solutions; and the way I work, I get inspiration for problems I come across from reading these types of blogs.

I hope reading this article, or all of them, was worth your time, and I would appreciate any feedback from your side.
I enjoyed writing this, so hopefully reading it was fun, too. My SharePoint journey isn’t over by far, so stay tuned for the next articles to come.

Back To Part VI
Back To Overview
