Thursday, November 05, 2015

Script to check databases upgrade status

Ever wonder how the upgrade is doing?
This script checks the upgrade status of all SharePoint databases.


Add-PSSnapin microsoft.sharepoint.powershell


$alldatabases  = get-spdatabase

write-output "Status for all databases"

$alldatabases | Sort-Object needsupgrade | select name,webapplication,canupgrade,needsupgrade | format-table -autosize

write-output "Databases needing upgrade: $(($alldatabases | where-object {$_.needsupgrade -eq $true}).count)"

write-output "Timestamp: $(get-date -format "yyyy-MM-dd.HH:mm")"

Tuesday, September 29, 2015

Speedier scripts

Some notes on how to handle output from scripts.
I did some comparisons of how much extra time is added to execution time just by using write-output to display information to users.
Running the script below gave these results:

With write-output uncommented: 11.8 s
With write-progress uncommented: 3.9 s
With no output: 0.9 s

$global:varNumb = 1

function foo {
 #write-host "entering function 'foo'"
 write-verbose "entering function 'foo'"
 #write-host "completed function 'foo' $varnumb"
 write-verbose "completed function 'foo' $global:varNumb"
}

$startTime = get-date
write-host "script started at $startTime"
$cycles = 1000
$i = 1
1..$cycles | foreach {
 foo
 #write-progress -Activity "Processing..." -CurrentOperation "Working on $i"
 #write-output "Testing $i"
 $i++
}
$endtime = get-date
write-host "Done in $(($endtime - $startTime).totalseconds) seconds"




It's amazing how much time is added just by adding write-output.
But as seen in this example, write-progress is preferred, if anything.
I suggest using write-verbose instead of write-output and setting $VerbosePreference = "Continue" when needed; the default is SilentlyContinue.
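A minimal sketch of the write-verbose approach: the verbose stream is silent by default and can be switched on per session. The function name here is just an example.

```powershell
function Get-Stuff {
    write-verbose "entering function 'Get-Stuff'"
    # ... actual work here ...
    write-verbose "completed function 'Get-Stuff'"
}

# Default: $VerbosePreference is 'SilentlyContinue', so nothing is printed
Get-Stuff

# Switch verbose output on for the whole session when troubleshooting
$VerbosePreference = "Continue"
Get-Stuff

# And back off again when done
$VerbosePreference = "SilentlyContinue"
```

This way the output costs nothing in normal runs and is available on demand, without editing the script.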



Thursday, September 17, 2015

List installed Sp2013 updates

Get a list of installed updates for SharePoint 2013.
If language packs are installed, their ParentDisplayName values will need to be added to the filter with -or.

$myfilename = "InstalledHotFixes.htm"
#settings end
$outputfile = "$(split-path -Parent $myinvocation.MyCommand.Definition)\$myfilename"

$items = Get-ItemProperty HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*
$spitems = $items |where-object {$_.parentdisplayname -like "Microsoft Sharepoint Server 2013"}
$spitems |select displayname,installdate,urlinfoabout,parentdisplayname |format-table -AutoSize
$spitems |select displayname,installdate,urlinfoabout,parentdisplayname | ConvertTo-html|Out-File $outputfile

write-output "Total patches: $($spitems.count)"
Invoke-Item $outputfile
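A sketch of the -or filter mentioned above. The language pack's exact ParentDisplayName string is an assumption; check the registry entries on your server for the real value.

```powershell
# Hypothetical example: also include patches for a Swedish language pack
$spitems = $items | where-object {
    $_.parentdisplayname -like "Microsoft Sharepoint Server 2013" -or
    $_.parentdisplayname -like "Language Pack for SharePoint and Project Server 2013*"
}
```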


Wednesday, September 02, 2015

Disturbed Cache

The Scenario
A custom-developed SharePoint 2013 publishing site using ADFS. AppFabric with CU5.

The problem
In the developed SharePoint site we were seeing a lot of strange issues.
Sometimes requests didn't finish, and some page components didn't get loaded.

The troubleshooting
It was very hard to troubleshoot since initially there wasn't any pattern to the errors.
In an attempt to mitigate the issue we raised the cookie lifetime for the ADFS sessions from the default 60 minutes to 720.
This gave us even more issues; the errors now occurred more frequently. While navigating for 10 minutes, an issue usually occurred twice.
When monitoring the connections with Fiddler while browsing, we noticed a lot of requests going to the ADFS server, when really it should only happen twice a day.
I ran a search for errors in the ULS log for the same period and noticed some errors relating to tokens.
Common errors were:
Token Cache: reverting to local cache to add the token….
Unexpected Exception in SPDistributedCachePointerWrapper::InitializeDataCacheFactory for usage 'DistributedLogonTokenCache'…. Request time out
And some others in the dump below.

Now, obviously I've wrestled with the cache before, so I'm really not surprised the nasty Distributed Cache is involved in this mess.

Ok, so now we've learned that timeouts occur on Distributed Cache's Logon Token Cache. How do we fix it?
Luckily the error message actually contains part of the solution. Blablabla … MaxBufferSize must be greater or equal….
To actually find these errors I used my LogSearcher:
Add-PSSnapin microsoft.sharepoint.powershell
$errorcategory = "DistributedCache"
$string = "*token*"
# calculates hours forward and backward.
$endtime = (get-date).addhours(1)
$starttime = (get-date).addhours(-2)
$events = Get-SPLogEvent -StartTime $starttime -EndTime $endtime |where-object {$_.message -like $string -and $_.category -like $errorcategory }
$events |select timestamp,level,area, message,category | format-table -wrap
write-output "Total: $(($events | measure).count)"

To check the server's timeout settings:
$sts = Get-SPSecurityTokenServiceConfig
write-host "SecurityTokenServiceconfig settings: "
$sts |select cookielifetime, logontokencacheexpirationwindow,WindowsTokenLifetime,MaxServiceTokenCacheItems,MaxLogonTokenCacheItems,MaxServiceTokenOptimisticCacheItems |format-list
write-host "DistributedLogonTokenCache settings:"
$dltc = Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache
$dltc | select maxbufferpoolsize,maxbuffersize,requesttimeout,channelopentimeout,maxconnectionstoserver

The solution
1. Increase timeouts on DistributedLogonTokenCache and cache items on SecurityTokenServiceConfig
$sts = Get-SPSecurityTokenServiceConfig
$dltc = Get-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache
$dltc.MaxBufferPoolSize = "1073741824"
$dltc.MaxBufferSize = "33554432"
$dltc.RequestTimeout = "3000"
$dltc.ChannelOpenTimeOut = "3000"
Set-SPDistributedCacheClientSetting -ContainerType DistributedLogonTokenCache $dltc
$sts.MaxServiceTokenCacheItems = "1500"
$sts.MaxLogonTokenCacheItems = "1500"
$sts.Update()  # persist the SecurityTokenServiceConfig changes

2. Stop all distributed cache services gracefully
Stop-SPDistributedCacheServiceInstance -graceful

Make sure to wait 5-10 minutes between the first cache restart and the last, so the cache actually has time to reconnect to the cache cluster and make friends with the other caches. Note that the cache needs to be restarted from Manage Services after the above command.
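The restart sequence could also be scripted, roughly like this. A sketch only: run it on each cache host in turn, and verify the service instance TypeName matches your farm.

```powershell
Add-PSSnapin microsoft.sharepoint.powershell

# Stop the local cache instance gracefully, then start it again
Stop-SPDistributedCacheServiceInstance -Graceful

$cache = Get-SPServiceInstance | where-object {
    $_.TypeName -like "Distributed Cache" -and $_.Server -like "*$env:computername*"
}
Start-SPServiceInstance -Identity $cache.Id

# Give the instance time to rejoin the cluster before touching the next host
Start-Sleep -Seconds 300
```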

3. Run IISreset on all hosts.

These errors started occurring pretty much from day one in the environment, though it was only when users actually started using the site that the problem became apparent.
The SecurityTokenServiceConfig doesn't seem to be used the same way when using Windows authentication. This made it impossible for the developers to troubleshoot in their environment without having their own ADFS solution.
Now, part of the problem was solved just by increasing the cache items, ChannelOpenTimeOut and RequestTimeout. I would say these settings solved the exceptions and failures,
while MaxBufferSize and MaxBufferPoolSize solved the "Failed to add/get token" errors below.

The first part I assume was due to the many API connections made on the start pages: a big page AJAX-loading a lot of its components via REST. These might have maxed out the items cache, or tokens didn't get saved in the cache properly or couldn't be acquired in time.
The second part: too many claims at a time, or claims that were too big or improperly formatted.
But I'm just guessing here.


Friday, July 10, 2015

Newsfeed acting up

When mounting a production mysite to a dev environment I got some exciting problems with the newsfeed page.

Posting new items worked fine, no errors, but when trying to comment on other users' threads the following errors were dropped in our laps.

Using the excellent Microsoft error translator I got this:
Swedish: Detta kunde inte posta eftersom vi har lite problem för tillfället.
English: This couldn't be posted because we're having some issues at the moment.
At the same time, event 8306 shows in the event log on the server.
After some digging in the ULS logs I found this:
STS Call Claims Saml: Problem getting output claims identity. Exception: 'System.InvalidOperationException: GetUserProfileByPropertyValue: Multiple User Profiles
Ok, so maybe a profile problem. I check my profile DB and find there are two profiles for my account, since I'm using both claims and Windows in my test environment. I remove the one not used and try to reply to a post again.
I'm awarded with a new error.
So now we're getting somewhere!
Translating the Swedish server message…
Swedish: Det gick inte att kontrollera rekursionen.
English: Recursion check failed.

Found an excellent post which solved my problems.
So I throw away the FeedIdentifier for my troubled users, re-access mysite using said user, and now it works!

Update the FeedIdentifier for problem users by deleting the value for Feed Service Provider Defined Identifier and logging in on the user's mysite to recreate it.
It's also possible to recreate the string using the new mysite's site ID; code in the post below.

Mysites are not to be thrown around lightly, especially between farms, even if they are in the same domain. All sites have unique site IDs. Had we also restored the User Profile Service we probably wouldn't have had this issue, but that is a whole lot of extra work. Therefore I now empty these Feed Identifiers with scripts when attaching mysites from production for selected users, to make everything shine.
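A sketch of what such a clearing script could look like. The internal property name SPS-FeedIdentifier, the mysite URL, and the account are assumptions; verify the property name in your profile schema first.

```powershell
Add-PSSnapin microsoft.sharepoint.powershell

# Assumed values - adjust to your environment
$mysiteUrl = "http://mysite.local"
$account = "mydomain\john.doe"

$site = Get-SPSite $mysiteUrl
$context = Get-SPServiceContext $site
$upm = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($context)

$profile = $upm.GetUserProfile($account)
# SPS-FeedIdentifier is assumed to be the internal name of
# "Feed Service Provider Defined Identifier"
$profile["SPS-FeedIdentifier"].Value = $null
$profile.Commit()
```

After this, have the user visit their mysite so the identifier is recreated against the new farm's site ID.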

References:
- Solution and extra code for the fix
- Localized error messages

Tuesday, July 07, 2015

IIS headers and powershell

Scenario: I have an old IIS site on one server and need to move all host headers for the site to a new IIS site on another server. I couldn't find any easy way to do it via the GUI, but scripting was pretty easy using the built-in IIS cmdlets.
So export all headers to a text file, edit that one if you need to, and then import it on the new server.

$iisSite = "mysite"
$path = "C:\Scripts\IIS-HostHeaders\hostheaders_mysite.csv"
$mybindings = Get-WebBinding -Name $iisSite | select protocol,bindinginformation
foreach ($binding in $mybindings) {
write-output "Exporting protocol $($binding.protocol) with binding $($binding.bindinginformation)"
}
$mybindings | export-csv -Path $path -NoTypeInformation


$myfilename = "hostheaders.csv"
$site = "myNewSite"
$importfile = "$(Split-Path -Parent $MyInvocation.MyCommand.Definition)\$myfilename"

import-csv $importfile -Delimiter "," | ForEach-Object {
try {
$hostheader = $_.bindinginformation.split(':')[-1]
New-WebBinding -Name $site -Protocol $_.protocol -HostHeader $hostheader
write-host -foregroundcolor green "Added $hostheader to $site with protocol $($_.protocol)"
}
catch { write-host -foregroundcolor yellow "Couldn't do it. Already there?" }
}


User Profile Service –what I learned so far

Sharepoint, huh? It's never easy.
I've been looking hard at the User Profile Service lately for a variety of reasons; this is what I've learned. Use caution and test locally before using these, as there's always the risk of wiping the mysite DB. But if the site hasn't been heavily used, what's there to lose?
So how does it all come together? These are our key players:
Component – Description
- Sharepoint Profile Synchronization – Uses Forefront Identity Manager for syncing AD. The old solution; the SyncDB often messes things up. Though it's the only solution if you need to write changes to AD, like profile pictures.
- Sharepoint Active Directory Import – Uses DirSync to import AD. Fast, but can only read.
- User Profile Service Application – Handles all our specifics. This service can be recreated and still keep the information in the databases, if these are not deleted.
- User Profile Synchronization Service – This server service must be running to make changes in the UPSA. When it runs, it creates local certificates that muddy the local certificate store. If the service is stubborn, the local certificates may be removed; they will be recreated.
- Microsoft Forefront Synchronization Manager – C:\Program Files\Microsoft Office Servers\15.0\Synchronization Service\UIShell\miisclient.exe. This software is useful for determining what goes wrong with the AD connection. It's only accessible after you've actually got the UPSA running. You can use Metaverse Search to verify that AD changes are coming through the connection.
- Timerjob User Profile Service Application_ProfSync – Also known as User Profile to Sharepoint Full Synchronization Job. This handles the sync from the ProfileDB to the site collections' User Information Lists. Runs every hour per default.
- Timerjob User Profile Service Application_SweepSync – This handles the sync from the ProfileDB to the site collections' User Information Lists incrementally. Runs every five minutes per default.
- Timerjob My Site Cleanup Job – This handles deletion of profiles marked for deletion, usually when profiles are removed from the User Profile Service. It also removes obsolete users. Mysites assigned to a deleted user are assigned to their manager and a notification is sent.

Symptoms: User Profile Synchronization Service stuck on Starting. Without it, no AD connection can be created.
Common solutions:
- Verify the service is running with the spfarm account
- Verify spfarm is a local administrator on the app server
- Stop the service and try to start it again:
$ups = get-spserviceinstance |where-object {$_.typename -like "User Profile Synchronization Service" -and $_.server -like "*$env:computername*"}
$ups |select id,typename,status,server
Stop-SPServiceInstance -Identity $ups.Id -Confirm:$false
Start-SPServiceInstance -Identity $ups.Id

- Remove all ForefrontIdentityManager certificates from the local certificate store and the services Forefront Identity Manager Service and Synchronization Service.
These will be recreated each time the service restarts.
- Empty the farm cache:
  - Stop the Timer Service on the local server
  - Delete all files except cache.ini in C:\ProgramData\Microsoft\SharePoint\Config\{guid} (the folder containing cache.ini)
  - Change the value in cache.ini to 1
  - Start the Timer Service
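The cache-clearing steps could be sketched as a script like this, assuming the default config path; run it as administrator on the server in question:

```powershell
# Stop the SharePoint Timer Service
Stop-Service SPTimerV4

# Find the config cache folder (the one containing cache.ini)
$configRoot = "C:\ProgramData\Microsoft\SharePoint\Config"
$cacheFolder = Get-ChildItem $configRoot -Directory |
    Where-Object { Test-Path (Join-Path $_.FullName "cache.ini") } |
    Select-Object -First 1

# Delete everything except cache.ini, then reset the counter to 1
Get-ChildItem $cacheFolder.FullName -Exclude "cache.ini" | Remove-Item -Force
Set-Content -Path (Join-Path $cacheFolder.FullName "cache.ini") -Value "1"

# Start the Timer Service again; the cache rebuilds on its own
Start-Service SPTimerV4
```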
Symptoms: Something is off with the running sync. For example, changes in AD are not replicating when they have done so before.
Common solution: Recreate the User Profile Service Application.
- Gather all the information you need to recreate the service:
- Database names
- Permissions for the User Profile Service (Central Admin > Manage User Profiles > People > Manage User Permissions)
- Administrators on the User Profile Service Application (UPSA), permissions on the UPSA
- Special permission levels, site naming format, security trimming options under My Site Settings in the UPSA
- Active Directory synchronization connections (OUs, accounts for connecting), synchronization settings
When recreating the UPSA with the old databases, the SyncDB has to be removed manually or given a new name. The SyncDB is the staging area between the ProfileDB and the FIM AD sync; basically what miisclient looks into to see how it all went. The SocialDB contains all likes and social functions.
Symptoms: Can't access the User Profile Service Application. The correlation ID shows: This User Profile Application's connection is currently not available. The Application Pool or User Profile Service may not have been started.
Common solution:
- Restart or start the User Profile Service and the User Profile Synchronization Service. Order: stop UPS, then UPSS; start UPS, then UPSS.
- Recreate the proxy for the service application and make sure the proxy is connected to the Default proxy group, or whatever group is used:
$proxy = get-spserviceapplicationproxy | Where-Object {$_.typename -eq "User Profile Service Application Proxy"}
$newproxyname = "User Profile Service Application"
write-host "Removing proxy..."
Remove-SPServiceApplicationProxy -Identity $proxy -Confirm:$false
$upa = get-spserviceapplication |Where-Object {$_.typename -eq "User Profile Service Application"}
write-host "Adding proxy..."
$newproxy = New-SPProfileServiceApplicationProxy -Name $newproxyname -Uri $upa.uri.AbsoluteUri
$defaultproxygroup = Get-SPServiceApplicationProxyGroup -Default
Add-SPServiceApplicationProxyGroupMember -Identity $defaultproxygroup -Member $newproxy

Symptoms: Users are not syncing from AD or the SyncDB to the ProfileDB.
- Checking the FIM sync from C:\Program Files\Microsoft Office Servers\15.0\Synchronization Service\UIShell\miisclient.exe shows that the sync is working from AD to the SyncDB
- Checking the content DBs shows sync is occurring between the ProfileDB and the user lists:
foreach($db in Get-SPContentDatabase){$db.Name+" - "+$db.LastProfileSyncTime}
- Checking the timer jobs shows the sync is running:
$TimerFullSync = get-sptimerjob | where-object {$_.name -eq "User Profile Service Application_ProfSync"}
$TimerQuickSync = get-sptimerjob | where-object {$_.name -eq "User Profile Service Application_SweepSync"}
$TimerFullSync,$TimerQuickSync |select name,Jobdisplayname,lastruntime,description |format-table -wrap

Common solution: Kill the connection and restart the sync. This is useful when the User Profile Service and site collections don't update properly. These should get updated by User Profile to Sharepoint Full Sync and SweepSync. Check with -listolddatabases first to see if the timestamp seems old.
set-location "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\BIN"
#stsadm.exe -o sync -listolddatabases 0
stsadm.exe -o sync -deleteolddatabases 0
$TimerFullSync = get-sptimerjob | where-object {$_.name -eq "User Profile Service Application_ProfSync"}
Start-SPTimerJob -Identity $TimerFullSync

References:
- Permissions for sync

Monday, July 06, 2015

csv-loop snippet


This example reads from mycsv.csv in the same folder as the script.
$myfilename = "mycsv.csv"

#settings end
$importfile = "$(split-path -Parent $myinvocation.MyCommand.Definition)\$myfilename"
import-csv $importfile -Delimiter "," |foreach-object {
#Everything here executes once for every line of $myfilename
write-host "$($_.fullpath) , $($ , $($_.file)"
}


Wednesday, July 01, 2015

Problems with updating feeds

An issue when running Update-SPRepopulateMicroblogFeedCache on Sharepoint 2013.

Add-PSSnapin microsoft.sharepoint.powershell
$accountname = "mydomain\spinstall"
$appProxy = Get-SPServiceApplicationProxy | where {$_.typename -eq "User profile service application Proxy"}
#$appProxy |format-table -AutoSize
Update-SPRepopulateMicroblogLMTCache -ProfileServiceApplicationProxy $appProxy
sleep -Seconds 30
Update-SPRepopulateMicroblogFeedCache -ProfileServiceApplicationProxy $appProxy -AccountName $accountname

Got an error thrown back at me.
When checking permissions on the UPS with:

$USPA = Get-SpServiceapplication | Where-Object {$_.TypeName -eq "User Profile Service Application"}
$sec = Get-SPServiceApplicationSecurity $USPA

I noticed my spinstall account was missing.
I added it with some nice code I found:

$USPA = Get-SpServiceapplication | Where-Object {$_.TypeName -eq "User Profile Service Application"}
$sec = Get-SPServiceApplicationSecurity $USPA
$account = New-SPClaimsPrincipal "mydomain\spinstall" -IdentityType WindowsSamAccountName
Grant-SPObjectSecurity $sec -Principal $account -Rights "Full Control"
Set-SPServiceApplicationSecurity -Identity $USPA -ObjectSecurity $sec

Finally the repopulate cmdlet ran successfully.

Note though, this should really only be necessary if the mysite has been restored or remounted (like an earlier version), or possibly if the Distributed Cache has been shut down ungracefully.


Friday, June 26, 2015

Exclude checkin comments

I had a request to hide check-in comments from search results in Sharepoint 2013. I decided to hide the column from the index.

This can be done from UI
  1. Central Administration>Search Service>Search Schema > Crawled Properties> Ows__checkinComment
  2. Uncheck Include in full-text index
  3. Full crawl

Or with powershell

Add-PSSnapin microsoft.sharepoint.powershell
$name = "ows__CheckinComment"

$searchapp = get-spenterprisesearchserviceapplication

$crawledprop = Get-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $searchapp -Name $name
Set-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $searchapp -Identity $crawledprop -IsMappedToContents $false
And a full crawl:

Add-PSSnapin microsoft.sharepoint.powershell

$contents = get-spenterprisesearchcrawlcontentsource -SearchApplication "Search Service Application"

foreach ($content in $contents) {
if ($content.crawlstatus -eq "Idle") {
write-host "Idle, running crawl on $($"
$content.StartFullCrawl()
}
else {write-host "$($ is not idle or on nogo-list" }
}


One thing to keep in mind: if the CheckinComment column is visible in any views on the site, that view will still be returned in the results.


Tuesday, May 05, 2015

Versioning in Sharepoint

What happens with versions when one is restored in a list?

If this is the starting position:

Version 4.1 is the latest minor version, and it is also published. When we restore version 3.0, this is what happens:

Version 3.0 is copied as version 4.2 and is now the current version.
If we change our mind and decide that wasn't a good one, we can restore the previous version 4.1 to get back to our starting position.

Results may vary depending on whether minor/major versioning is enabled.

If we want to compare versions when working with documents, we do this from within Word itself, under Info > Manage Versions.

Monday, March 09, 2015

Upload files to library

Simple script to upload all files from a specific folder to a document library.


$path = "C:\myfiles\TestDocs";

$user = "mydomain\john.doe"

$pass= "mysecretpassword"

$destination = "";



$securePassword = ConvertTo-SecureString $pass -AsPlainText -Force;

$credentials = New-Object System.Management.Automation.PSCredential ($user, $securePassword);

#$credentials = [System.Net.CredentialCache]::DefaultCredentials;


$webclient = New-Object System.Net.WebClient;

$webclient.Credentials = $credentials;


Get-ChildItem $path | Where-Object {$_.Length -gt 0} | ForEach-Object { $webclient.UploadFile($destination + "/" + $_.Name, "PUT", $_.FullName)};


Thursday, February 26, 2015

xls tries to open with excel services when searching

I had an issue with Excel documents trying to open with Excel Services when using search, although Excel Services wasn't even configured on the Sharepoint server.
Looking at the site level, the feature to always open documents in the client program was activated. Same thing at the list level: Server default was selected.
Workaround solution:
When searching, check Preferences at the bottom.
Activate Open in the browser. This probably only solves it for the current user, but it helps if you're in a tight spot.

Retrieve latest timerjobs

To get a quick overview of the latest timer runs and get rid of endless scrolling in Central Admin:
Add-PSSnapin microsoft.sharepoint.powershell
$number= 10 #total results
$timername = "User Profile Service Application_LMTRepopulationJob"
$timerjob = Get-SPTimerJob $timername
$timerjob.HistoryEntries | select jobdefinitiontitle,starttime,endtime,status,errormessage -first $number|format-table

Thursday, February 19, 2015

Overall steps to move from Active Directory to ADFS authentication in Sharepoint 2013

This is from a bird's-eye perspective. I found it difficult to find resources that outlined the procedure. This omits everything that happens on the ADFS server and focuses on the Sharepoint parts.

1. Back up all content databases, take snapshots of the servers
2. Add relying party identifiers on the ADFS server. Add endpoints with ws_federation for all web applications that are going to use ADFS.
3. Add the identity provider on the Sharepoint server (PS1.ConfigSPIdentifier)
4. Activate wsreply on the Sharepoint server
5. Add the identity provider to the relevant web applications:
Authentication Provider > Claims Authentication Types > Trusted Identity Provider
6. Configure a new User Profile Synchronization connection, with Authentication Provider type = Trusted Claims > your ADFS
Configure user properties:
email = mail
Claim User Identifier = mail
Run a full sync
7. Convert all users on the web applications. Don't convert search accounts, and don't convert Authenticated Users; change that to, for example, Domain Users.
Use move-spuser
8. Change the Super User and Super Reader accounts to ADFS:
change the cache readers on each web app and change the accounts on User Policy on the web apps.
9. Change the login page for ADFS (optional, but enables automatic sign-in for directory users) (also enables the crawl to run as an AD user):
Copy autologin.aspx to common files\template….
Change autologin.aspx to use the current ADFS provider
Change Central Administration > authpolicy > default sign-in page = autologin.aspx
10. Check the search engine so everything works.
11. Install LDAPCP to solve people picker issues.
12. Hide AD from the selection list
13. Done.
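Step 7's conversion with move-spuser could look roughly like this. A sketch only: the web application URL, domain, account filter, and the trusted identity provider name are assumptions; the claim encoding follows the email identifier claim used in this migration.

```powershell
Add-PSSnapin microsoft.sharepoint.powershell

# Assumed values - adjust to your environment
$webAppUrl = "http://intranet.local"
$tipName = "ADFS"  # name of your trusted identity token issuer

foreach ($site in Get-SPSite -WebApplication $webAppUrl -Limit All) {
    $web = $site.RootWeb
    foreach ($user in $web.SiteUsers) {
        # Only convert real Windows accounts; skip service/search accounts
        if ($user.LoginName -like "i:0#.w|mydomain\*" -and
            $user.LoginName -notlike "*svc*") {
            $spuser = Get-SPUser -Identity $user.LoginName -Web $web.Url
            # Trusted identity provider claim with email as the identifier
            $newLogin = "i:05.t|$tipName|$($user.Email)"
            Move-SPUser -Identity $spuser -NewAlias $newLogin -IgnoreSID -Confirm:$false
        }
    }
}
```

Test on a single site collection first; Move-SPUser rewrites the account everywhere and cannot easily be undone.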


Issues that occurred:
Couldn't view any other users' mysites – solution: add / permissions for all users.

Otherwise it worked quite nicely. 6000+ users migrated. This solution used email as the primary claim, which required the email field in AD to be populated with a unique value.

Do I need to say that I take no responsibility for when this goes sideways? I don't. But it worked for me.


$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\xss\ADFS-Token-Signing.cer")

New-SPTrustedRootAuthority -Name "Token Signing Cert" -Certificate $cert

$map = New-SPClaimTypeMapping -IncomingClaimType "" -IncomingClaimTypeDisplayName "upn" -SameAsIncoming

$map2 = New-SPClaimTypeMapping -IncomingClaimType "" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming

$map3 = New-SPClaimTypeMapping -IncomingClaimType "" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming

$realm = "urn:test-intranet:sharepoint"

$ap = New-SPTrustedIdentityTokenIssuer -Name "" -Description "STS-IP" -realm $realm -ImportTrustCertificate $cert -ClaimsMappings $map,$map2,$map3 -SignInUrl "" -IdentifierClaim $map3.InputClaimType


$tit = Get-SPTrustedIdentityTokenIssuer
$tit.UseWReplyParameter = $true
$tit.Update()

$cpm = Get-SPClaimProviderManager
$ad = get-spclaimprovider -identity "AD"
$ad.IsVisible = $false
$cpm.Update()

References:
- Explains sliding sessions
- Explains ADFS tokens
- Good information on the authentication process
- wsreply
- ADFS Sharepoint 2013 skip authentication provider page
- autologin.aspx

Wednesday, February 11, 2015

Workflow Manager installation

A Sharepoint 2013 farm needs Sharepoint 2013 workflow support.


  1. Install the Web Platform Installer on a computer with internet access. Go to C:\Program Files\Microsoft\Web Platform Installer\ with cmd.
  2. Run webpicmd.exe (for scenarios where internet access is unavailable):
    1. WebpiCmd.exe /Offline /Products:WorkflowManagerRefresh /Path:c:\xss\workflowmanagerrefresh
    2. WebpiCmd.exe /Install /Products:WorkflowManagerRefresh /XML:c:\xss\workflowmanagerrefresh\feeds\latest\webproductlist.xml
    3. On members of the farm (not hosting Workflow Manager): WebpiCmd.exe /Install /Products:WorkflowClient /XML:c:\xss\workflowmanagerrefresh\feeds\latest\webproductlist.xml
  3. Run the Workflow Manager configuration
  4. Register the workflow from the Sharepoint Management Shell:
    1. Register-SPWorkflowService -SPSite -WorkflowHostUri -AllowOAuthHttp -Force -ScopeName SharePoint
  5. Verify the installation with Sharepoint Designer: Workflow 2013 is available.