Thoughts on Supporting Outlook for Mac 2016 from Microsoft Ignite @ Chicago, Illinois

Recently, I attended Microsoft’s inaugural Ignite tech conference in Chicago, Illinois. One of the best parts of the experience was hearing firsthand about product changes from the developers. Beyond the rebranding of Outlook into Office 2016 for the Mac, Microsoft mapped out its commitment to stability, enhancements, and regular updates, with releases planned every month or two from this point forward. Another noteworthy improvement was the handling of attachments: they have been reintegrated into the client, with the same familiar icon, for saving directly into Office 365 or SharePoint. As an Exchange administrator, I wondered, “How much email quota space is an employee going to save?” Probably quite a bit!


Of course, with the good came the bad. Still off the table were “recover deleted items” and import/export of PSTs. Though these remain Windows-only options, the product management team verbally committed to addressing the missing features in the near future (i.e. hopefully within a year).

So, the big takeaway was developers reaching out to their customers. “How?” you might ask. For my current environment, three questions stood out:

1. Where should you report or ask about a bug?

2. Where can you ask for an enhancement for future versions of Outlook for Mac?

3. Where can you ask a quick question relating to product support outside of a bug, fix, or enhancement (i.e. what are the supported versions of Exchange for Outlook 2016 for Mac)?

The answer to all three: on Twitter, with ‘Outlook for Mac’

Apparently, the product management team curates each of the above channels regularly. That openness to contact was far more refreshing than hearing about any feature parity or cloud strategy. For years, Mac development seemed like an afterthought. Now, Microsoft has made, in both words and deeds, the thought of supporting Outlook for Mac just a bit more exciting!

Read More:

Meet the new Outlook for Mac 2016

The Office 2016 Mac Preview is here!

Microsoft Ignite Keynote, 2015


CentOS 6.5 VMware guest VM’s network no longer works, “destination unreachable”

In the course of supporting a CentOS 6.5 guest VM under vSphere 5.5, I experienced three different scenarios that caused a loss of network connectivity: a P2V conversion, a network driver update (E1000 > VMXNET3), and a full VM restore. In each instance, my adventure began by typing ping at the command prompt to interrogate the gateway, ultimately receiving the error “destination unreachable.” With each instance, I gained a little more experience as a Windows guy on UNIX. The steps below summarize those collective experiences.

With respect to CentOS 6.5 P2V conversion support by VMware, this was officially untested as of July 2014. For more information, see “Using VMware vCenter Converter Standalone to convert physical or virtual CentOS 5.x machines to vSphere 5.1 environments” on SpiceWorks. These steps assume an administrator has already verified that the VM guest adapter is enabled and available, and that the proper VLANs are configured in the VMware guest settings and on any attached network equipment. So, if you receive “destination unreachable” when pinging your gateway, begin with the steps below:


1. Log in as root.

2. Review the active OS network connections by running the command ip a


# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet scope host lo

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000

link/ether 00:50:56:b7:79:ef brd ff:ff:ff:ff:ff:ff

inet brd scope global eth0

inet6 fe80::250:56ff:feb7:79ef/64 scope link

valid_lft forever preferred_lft forever
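When comparing this output against the ifcfg files reviewed in steps 3 and 4, it can help to pull just the interface names out of a saved copy of the `ip a` output. A minimal sketch (the function name is illustrative; the awk pattern assumes the stock CentOS 6 output format shown above):

```shell
# Print interface names (lo, eth0, ...) from a file holding `ip a` output,
# so they can be checked against the ifcfg-* files under
# /etc/sysconfig/network-scripts.
iface_names() {
    awk -F': ' '/^[0-9]+: /{print $2}' "$1"
}
```

Usage: `ip a > /tmp/ipa.txt; iface_names /tmp/ipa.txt`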


3. Review the network adapter IP configuration: go to /etc/sysconfig/network-scripts/


4. List the files in the directory with the ls command. The listed interface files should match up with the results of step 2.





5. If an extra or odd-numbered interface exists, say ifcfg-eth1, I recommend making a copy of the file first (i.e. cp ifcfg-eth1 backup.ifcfg-eth1).

6. Review the network interface configuration in vi (i.e. vi ifcfg-eth0, vi ifcfg-eth1, etc.).

Example of output (typical contents; values vary by environment):

DEVICE=eth0
HWADDR=00:50:56:b7:79:ef
ONBOOT=yes
BOOTPROTO=static

7. To exit the file without making changes, type :q


8. To make changes, I recommend going to the CSU Basic vi Commands page to learn the following commands:

i for insert

:wq for save and quit

9. For a file that requires changes, focus on two things:

a. the filename (i.e. cp ifcfg-eth6 ifcfg-eth0);

b. ensure the value of DEVICE= matches the necessary interface (i.e. eth0 vs. eth6).
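Steps 9a and 9b can be combined into a small helper. This is a sketch (the function name and filenames are illustrative; it edits files in the current directory, so run it from /etc/sysconfig/network-scripts after taking the backup from step 5):

```shell
# Copy a stray interface file into place (9a) and rewrite its DEVICE=
# line to match the target interface name (9b).
fix_ifcfg() {
    src="$1"    # e.g. ifcfg-eth6
    dev="$2"    # e.g. eth0
    cp "$src" "ifcfg-$dev"
    sed -i "s/^DEVICE=.*/DEVICE=$dev/" "ifcfg-$dev"
}
```

Usage: `fix_ifcfg ifcfg-eth6 eth0`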

10. After making the necessary changes, restart the network service: service network restart


11. If you are still unable to ping the gateway, review the routing table (the UNIX analogue of route print) by running: route

Example A:

Destination    Gateway    Genmask    Flags    Iface
default        ...        ...        UG       eth0

Example B:

Destination    Gateway    Genmask    Flags    Iface
default        ...        ...        UG       eth6

In Example B, the wrong interface is configured. To correct it, run /sbin/route with your own network and gateway values: route add -net <network> gw <gateway> eth0
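To spot the Example B mismatch quickly, the interface carrying the default route can be pulled out of saved `route` output. A sketch (the function name is illustrative; it assumes the default-route line begins with `default` or `0.0.0.0`):

```shell
# Print the interface (last column) of the default-route line from a file
# holding `route -n`-style output.
default_iface() {
    awk '$1 == "default" || $1 == "0.0.0.0" { print $NF }' "$1"
}
```

Usage: `route -n > /tmp/routes.txt; default_iface /tmp/routes.txt`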



12. If pings to the gateway continue to fail, verify the NetworkManager service status.

13. To verify whether NetworkManager is running at any runlevel: chkconfig --list NetworkManager


Example of output: NetworkManager 0:off   1:off   2:off   3:off   4:off   5:off   6:off

a. To stop the NetworkManager service: service NetworkManager stop

b. To disable NetworkManager from running at boot: chkconfig NetworkManager off
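The chkconfig output in the example above can also be checked mechanically: a line containing `:on` at any runlevel means the service will start at boot. A sketch (the function name is illustrative; it operates on a captured line of `chkconfig --list` output):

```shell
# Return success (0) if a chkconfig --list line shows the service "on"
# at any runlevel, failure (1) otherwise.
svc_enabled() {
    echo "$1" | grep -q ':on'
}
```

Usage: `svc_enabled "$(chkconfig --list NetworkManager)" && echo "still enabled"`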


D. DELETING 70-persistent-net.rules file

14. If the network interface and default gateway configuration look good, go to /etc/udev/rules.d


15. Locate the 70-persistent-net.rules file; it contains the adapter MAC addresses and aliases.

Example of content:

# This file was automatically generated by the /lib/udev/write_net_rules

# program, run by the persistent-net-generator.rules rules file.


# You can modify it, as long as you keep each rule on a single

# line, and change only the value of the NAME= key.

# PCI device 0x8086:0x100f (e1000)

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:b7:79:ef", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

# PCI device 0x15ad:0x07b0 (vmxnet3)

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:50:56:b7:48:89", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
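Before deleting the file, it can be worth recording which MAC address maps to which alias; the pairs can be pulled out with sed. A sketch against the rule format shown above (the function name is illustrative):

```shell
# Print "MAC NAME" pairs from a persistent-net.rules file.
list_net_rules() {
    sed -n 's/.*ATTR{address}=="\([^"]*\)".*NAME="\([^"]*\)".*/\1 \2/p' "$1"
}
```

Usage: `list_net_rules /etc/udev/rules.d/70-persistent-net.rules`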


16. Delete this file; when prompted, type Y for yes.



17. Run the reboot command. The file will be recreated after the restart.

18. After the restart, repeat the earlier verification steps to ensure the network configuration file and route were properly updated.

Read More:

CentOS 6.3 Device eth0 does not seem to be present (Minimal non-cloned setup)

Disabling Network Manager

Using VMware vCenter Converter Standalone to convert physical or virtual CentOS 5.x machines to vSphere 5.1

Basic vi Commands



Updating BIOS and VMware View Agent on HP t5545 Thin Client

A few weeks ago at work, I was tasked with updating the BIOS and VMware View agent on out-of-warranty HP t5545 thin clients to support a newer release of VMware View. The catch was getting comfortable with UNIX and updating packages in a certain sequence. Rather than have one more IT person squirm, I thought I would pay it forward with this extended set of instructions, assembled after some trial and error and reading HP documentation.

A. Verify BIOS and View Agent

  1. Boot the TERMINAL.
  2. Click the HP START menu.
  3. NOTE: The default passwords are Administrator/administrator or root/root.
  4. Under the GENERAL tab:
  5. Verify the BIOS VERSION (i.e. 786R5 v2.00 vs. 786R5 v2.02).
  6. Verify the VMWARE-VIEW-CLIENT version (i.e. 4.0.1-1 vs. 4.6.0-366101-1).


B. Create a Bootable Backup Image on USB

  1. Click the MANAGEMENT tab.
  2. Insert a USB stick.
  3. Select HP THINPRO IMAGE, click NEXT.
  4. Click FINISH. Click YES to CONTINUE.
  5. The TERMINAL will RESTART and back up. Click OK when done. Click CLOSE.
  6. NOTE: This makes the drive a bootable version of UNIX for the t5545 client.

C. Configure the USB Drive with Updates

  1. Download the WES Add-On (Configure BIOS Settings) (International), sp49355.exe.
  2. Extract the files on a Windows system to: C:\SWSetup\SP49355\LFlash\
  3. Copy the LFlash directory to the USB drive.
  4. Download the Thin Client Add-On – VMware Horizon View Client (International), sp52674.exe.
  5. Extract the files to: C:\Program Files\Hewlett-Packard\HP ThinPro\Add-On\View46\VMVIW3\
  6. Copy VMVIW3 to the USB drive.


D. Flash the BIOS from USB

  1. Place the USB drive in the terminal and restart the system.
  2. When prompted to run, select YES, and confirm YES.
  3. Wait for the flashing yellow prompt to power cycle.
  4. Remove the drive after the restart.
  5. NOTE: Some terminals required HP’s DOS version of their OS to load the BIOS update.


E. Install the VMware View Client Add-On

  1. After the restart, log on to Admin Tools.
  2. Log on to the TERMINAL.
  3. Plug the drive into the USB port on the front.
  4. List drives: fdisk -l
  5. Mount the USB drive, for example: mount /dev/sda1
  6. The CLI will return the path in the error message “mount: /dev/sdb1 already mounted,” or navigate to the /tmp/ path for the USB drive (i.e. /tmp/tmpfs/media/K*).
  7. Change directory until you locate the add-on (i.e. cd VMVIW3).
  8. List the directory contents: ls
  9. Run: fsunlock
  10. Run: dpkg -i *.deb
  11. Run: fslock
  12. Restart the system.
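The unlock/install/lock sequence above can be wrapped in a small guard so dpkg never runs against an empty directory. This is a sketch: fsunlock and fslock are the HP ThinPro helpers named in the steps, and the function name and directory argument (e.g. the mounted VMVIW3 folder) are illustrative:

```shell
# Unlock the ThinPro root filesystem, install every .deb in the given
# directory, and re-lock. Refuses to run if no packages are found.
install_addon() {
    dir="$1"
    set -- "$dir"/*.deb
    [ -e "$1" ] || { echo "no .deb packages in $dir" >&2; return 1; }
    fsunlock              # make the root filesystem writable
    dpkg -i "$dir"/*.deb  # install the View client packages
    fslock                # re-lock the filesystem
}
```

Usage: `install_addon /tmp/tmpfs/media/KINGSTON/VMVIW3`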

Read More:
HP t5545 Thin Client Drivers

Upgrading beyond VMware Horizon View Client 4.6 for the t5545 thin client



Fix for “The server to which the application is connected cannot impersonate the requested user due to insufficient permission” Cisco Unity Connection Server 9.1 with Exchange 2013

I received the following error message, “The server to which the application is connected cannot impersonate the requested user due to insufficient permission,” when validating a mailbox that pointed to a different account (i.e. for delivering voicemail to another mailbox).

Here is the technical background for the scenario.

a) Coexistence of Exchange 2007 SP3 and Exchange 2013 CU1.

b) Cisco Unity Connection Server version 9.1.2TT1.11900-2TT1.

c) The PowerShell command New-ManagementRoleAssignment -Name:RoleName -Role:ApplicationImpersonation -User:'Account' applied to the Exchange 2013 environment per the Cisco article “Configuring Cisco Unity Connection 9x and Microsoft Exchange for Unified Messaging.”

d) Deleted and recreated the AD account and mailbox, and migrated back and forth between Exchange 2007 and Exchange 2013; same result.

The whole error message reads as follows:

HTTP status=[200] Diagnostic=[Failed to extract folder ID "" from response] Verb=[POST] url=[] request=[<?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="" xmlns:xsd="" xmlns:soap="" xmlns:t=""> <soap:Header> <t:RequestServerVersion Version="Exchange2007_SP1"/> <t:ExchangeImpersonation> <t:ConnectingSID> <t:PrimarySmtpAddress> </t:PrimarySmtpAddress> </t:ConnectingSID> </t:ExchangeImpersonation> </soap:Header> <soap:Body> <GetFolder xmlns="" xmlns:t=""> <FolderShape> <t:BaseShape>Default</t:BaseShape> </FolderShape> <FolderIds> <t:DistinguishedFolderId Id="deleteditems"> <t:Mailbox> <t:EmailAddress> </t:EmailAddress> </t:Mailbox> </t:DistinguishedFolderId> </FolderIds> </GetFolder> </soap:Body> </soap:Envelope> ] response=[<?xml version="1.0" encoding="utf-8"?><s:Envelope xmlns:s=""><s:Header><t:ServerVersionInfo MajorVersion="8" MinorVersion="3" MajorBuildNumber="342" MinorBuildNumber="0" Version="Exchange2007_SP1" xmlns:t=""/></s:Header><s:Body><soap:Fault xmlns:soap=""><faultcode>soap:Client</faultcode><faultstring>The server to which the application is connected cannot impersonate the requested user due to insufficient permission.</faultstring><detail><e:ResponseCode xmlns:e="">ErrorImpersonationDenied</e:ResponseCode><e:Message xmlns:e="">The server to which the application is connected cannot impersonate the requested user due to insufficient permission.</e:Message></detail></soap:Fault></s:Body></s:Envelope>]

To fix this issue, perform the following steps:

1. Logon to Cisco Unity Connection Server.


2. Select Users and find the target account (i.e. 7777).

3. Verify the SMTP address matches the account.

4. Under LDAP Integration Status, select DO NOT INTEGRATE WITH LDAP DIRECTORY.

5. Ensure LIST IN DIRECTORY is selected.

6. Click SAVE for the changes.

7. Verify the RELAY ADDRESS is set to the other account’s mailbox.

8. Click SAVE, then TEST.

Successful test results should look like the following:

Task Execution Results
Severity Issue Recommendation Details
The validation results for the user unified messaging service account with service GFX-Exchange-2013 are the following: Service “GFX-Exchange-2013”: AuthenticationMode=NTLM [use HTTPS/no-validate] Server=[] Type=[Exchange 2007/2010] Username=[gfx\gfx_svc2013]
Mailbox was successfully accessed. Connected to using EWS.

Read more:
Configuring Cisco Unity Connection 9x and Microsoft Exchange for Unified Messaging (CISCO)


Scheduling Exchange mailbox moves in bulk with the Get-Mailbox -Database PowerShell command

Over the years, I have opted to skip database maintenance by moving mailboxes to new databases. Back in the old days, I used to copy mailbox after individual mailbox; however, with PowerShell and Exchange 2007 and above, you can schedule a whole database’s worth of mailboxes at a time, and multiples thereafter.

1. An Exchange server or a system running the Exchange Management Tools.

2. An account with Organizational Admin and Domain Admin permissions (i.e. powershell@garzafx.lcl).

3. Open PowerShell and run Set-ExecutionPolicy RemoteSigned to allow running your own scripts.

4. Create a folder labeled c:\scripts to store your scripts.

5. Copy and paste the script below into a file labeled mailboxmoves.ps1. Modify the database and server names as necessary.

# Add the Exchange snap-in to execute Exchange cmdlets in this script

Add-PSSnapin Microsoft.Exchange.Management.PowerShell.Admin

Get-Mailbox -Database "garzafx\A2013" | Move-Mailbox -BadItemLimit 10 -TargetDatabase "garzafx\A2014" -Confirm:$False

Get-Mailbox -Database "garzafx\B2013" | Move-Mailbox -BadItemLimit 10 -TargetDatabase "garzafx\B2014" -Confirm:$False


6. Ensure you have adequate log space on your system and free space on your target drives.

7. Create a scheduled task labeled: mailboxmoves.

8. Run under service account (i.e. powershell@garzafx.lcl).

9. Select the option to "Run whether user is logged on or not."

10. Choose "Run with highest privileges."

11. Select trigger options (i.e. every day, 9 PM) and verify the status is enabled.

12. Under the Actions tab, select "Start a program."

13. For Program/script, enter: C:\Windows\system32\WindowsPowerShell\v1.0\powershell.exe

14. Under Add arguments, enter: -Command "& 'C:\scripts\mailboxmoves.ps1'"

15. Click OK and enter credentials when prompted.

16. After the scheduled task has run, verify that no mailboxes reside in your old databases (i.e. Get-MailboxStatistics -Database myexchangeserver\agarzafx2013). Then proceed with file and folder deletion.

NOTE: You can schedule as many database mailbox moves as you like; just be mindful of the time of day and available storage, otherwise you might inadvertently take information stores offline.

Read More:

How To: Schedule PowerShell Script for an Exchange Task  (Exchange Server Share)


Office 365: configuring a user’s password not to expire via PowerShell

For those mentally transposing commands from Active Directory and Exchange PowerShell, I thought this might be useful in avoiding some pitfalls in your daily Azure admin tasks.

Before we start, double-check the following prerequisites for running the Windows Azure Active Directory Module for Windows PowerShell.

a. Review software requirements.

b. Install the Windows Azure AD Module for Windows PowerShell.

So let us begin!

1. Under Microsoft Online Services, run as administrator the Microsoft Online Services Module for Windows PowerShell.

2. Enter: Connect-MsolService

3. When prompted, enter login credentials for an Office 365 administrative account.

4. Locate the User Principal Name for the account for editing.

NOTE: If unsure, export your entire list of users with the following command: Get-MsolUser -All | Export-Csv c:\getusers.csv

5. Run Set-MsolUserPassword to set a password. Reference the following example (substitute the target account’s UPN):

Set-MsolUserPassword -UserPrincipalName <UPN> -ForceChangePassword $False -NewPassword "Xenomorph#1"

6. To set the user’s password not to expire, pipe Get-MsolUser to Set-MsolUser. Reference the following example:

Get-MsolUser -UserPrincipalName <UPN> | Set-MsolUser -PasswordNeverExpires $true

The key parameter that needs qualifying is -UserPrincipalName. In Exchange and Active Directory PowerShell, I would leave this open-ended or close it with the -Identity switch. I also got tripped up with the Boolean value: -ForceChangePassword takes $False, not "False" or False.

If you haven’t typed the commands properly, expect some of the following automated errors:

Get-MsolUser : A positional parameter cannot be found that accepts argument

Set-MsolUserPassword : Cannot convert 'System.String' to the type 'System.Nullable`1[System.Boolean]' required by parameter 'ForceChangePassword'.

Read More:
Set-MsolUserPassword (Microsoft TechNet)
Get-MsolUser (Microsoft TechNet)
Set up user passwords to never expire (Office365)


Automating export of Exchange mailboxes and deletion of Active Directory User Accounts


In a march toward process refinement, I wanted to automate a manual process that starts at the end of employee termination, after an AD account becomes disabled. With that in mind, I outlined the objectives for the automated scheduled task.

  1. Export Exchange mailboxes from an OU to PST.
  2. Export a list of users from the OU whose last logon is more than 30 days old.
  3. Delete the users from the OU whose last logon is more than 30 days old.


  • A Windows 7 system joined to your domain (i.e. garzafx.lcl).
  • Install Microsoft Office 2010 or Office 2013.
  • Exclude the Windows 7 system from automatic Windows updates; periodically, an Office update breaks the client-side export from Exchange.
  • Install the Exchange Management Tools matching the version number on the Exchange server.
  • Download and install “Remote Server Administration Tools” on Windows 7 [KB958830].
  1. Create a folder named C:\psts.
  2. Create a powershell file labeled export-mail.ps1 inside C:\psts
  3. Copy and paste the below, then into export-mail.ps1

# Add the Exchange snap-in to execute Exchange cmdlets in this script

Add-PSSnapin Microsoft.Exchange.Management.PowerShell.Admin

# Get mailboxes for disabled users; add rights for the service account

Get-Mailbox -OrganizationalUnit "Disabled Users" | Add-MailboxPermission -User "garza\powershell" -AccessRights FullAccess -InheritanceType all

# Export mailboxes with no confirmation to the directory, setting a limit of 10,000 for corrupt items

Get-Mailbox -OrganizationalUnit "Disabled Users" | Export-Mailbox -PSTFolderPath "C:\PSTs\" -Confirm:$false -BadItemLimit 10000

# Enable the Active Directory PowerShell module, get AD users over 30 days old from the Disabled Users OU, export them to a file with a date stamp, then delete

Import-Module ActiveDirectory

Get-ADUser -SearchBase "OU=Disabled Users,DC=lkm,DC=sungardps,DC=lcl" -Filter * -Properties lastLogonDate | where { $_.lastLogonDate -lt (Get-Date).AddDays(-30) } | Export-Csv c:\psts\delete_ad_users_lkm_$((Get-Date).ToString('MM-dd-yyyy_hh-mm')).csv

Get-ADUser -SearchBase "OU=Disabled Users,DC=lkm,DC=sungardps,DC=lcl" -Filter * -Properties lastLogonDate | where { $_.lastLogonDate -lt (Get-Date).AddDays(-30) } | Remove-ADUser -Confirm:$false

4. Create a service account with the appropriate rights for powershell@garzafx.lcl (i.e. Domain Admins and Exchange Organizational Admins).

5. Create or relabel an OU for DISABLED USERS in the root of Active Directory. This can be whatever you want, just modify the script as necessary.

6. Create a scheduled task labeled: export-mail.

7. Run under service account (i.e. powershell@garzafx.lcl).

8. Select the option to "Run whether user is logged on or not."

9. Choose "Run with highest privileges."

10. Select trigger options (i.e. every day, 7 AM) and verify the status is enabled.

11. Under the Actions tab, select "Start a program."

12. For Program/script, enter: C:\Windows\system32\WindowsPowerShell\v1.0\powershell.exe

13. Under Add arguments, enter: -Command "& 'C:\psts\export-mail.ps1'"

14. Click OK and enter credentials when prompted.

The key objectives for the task were to provide a daily routine that exports mailboxes, writes a copy of the expiring accounts to CSV, and then deletes the 30-day-old accounts. The process served as a primer for other tasks; the main changes here were adding the Exchange permissions and the timestamp on the daily CSV file.


Cleaning up Active Directory Computers with PowerShell

For better management of Active Directory computer objects across two domains, I configured a scheduled Windows task, run from a Windows 7 VM, to clean up computer objects. Specifically, I wanted to disable and delete computer accounts after moving them into a specific OU. Note, this previously worked on Windows 2008 R1; however, the script eventually stopped processing because of an access-denied error message (Windows PowerShell issue with Move-ADObject access denied, KB article 2806748). I attempted running the script with different versions and languages of PowerShell, but to no avail. I ultimately upgraded the problem domain to Windows 2008 R2.

The script below continues to work on Windows 2008 R2 and above, plus Windows 2003 with the Active Directory Management Gateway Service. Modify the directions below for your environment.

1. Create a folder named C:\scripts.

2. Create a PowerShell file labeled movecomputers.ps1 inside C:\scripts.

3. Copy and paste the script below and save it into movecomputers.ps1.

# a. Get computers on the domain whose last logon is over 60 days old and move them to the Disabled Computers OU

Get-ADComputer -Properties lastLogonDate -Filter * | where { $_.lastLogonDate -lt (Get-Date).AddDays(-60) } | Move-ADObject -TargetPath "OU=Disabled Computers,DC=garzafx,DC=com" -Confirm:$false -Verbose

# b. Get computers over 60 days old and DISABLE them

Get-ADComputer -Properties lastLogonDate -Filter * | where { $_.lastLogonDate -lt (Get-Date).AddDays(-60) } | Disable-ADAccount

# c. Get disabled computers over 75 days old, export the list to a file, then DELETE them

$stale = Search-ADAccount -AccountDisabled -SearchBase "OU=Disabled Computers,DC=garzafx,DC=com" -ComputersOnly | where { $_.lastLogonDate -lt (Get-Date).AddDays(-75) }
$stale | Export-Csv c:\scripts\
$stale | Remove-ADObject -Recursive -Confirm:$False -Verbose

4. Create a service account with domain admin rights.

5. Create an OU labeled DISABLED COMPUTERS in the root of Active Directory.

6. Create a scheduled task labeled adcomputer.cleanup.

7. Run it under the service account.

8. Select the option to "Run whether user is logged on or not."

9. Choose "Run with highest privileges."

10. Select trigger options (i.e. every day, 7 AM) and verify the status is enabled.

11. Under the Actions tab, select "Start a program."

12. For Program/script, enter: C:\Windows\system32\WindowsPowerShell\v1.0\powershell.exe

13. Under Add arguments, enter: -File C:\scripts\movecomputers.ps1


14.  Click OK and enter credentials as necessary.


Read More:

Windows PowerShell issue with Move-ADObject access denied, KB article 2806748 (Microsoft)

Active Directory Management Gateway Service, Active Directory Web Service for Windows Server 2003 (Microsoft)

Resolving computer object replication conflicts

Having an issue activating a mobile device under Exchange ActiveSync? Clear ActiveSyncAllowedDeviceIDs!

From time to time, as an IT administrator, you recall a problem by an employee’s name; for our purposes, Ellen Ripley, or if you prefer, xenomorph. The most recent “aha moment” derived from migrating an iPhone to a newer mobile device management product. Since this issue has come around twice over half a decade, and is vexing enough to remember, I figure it is worth mentioning.

The symptoms: Windows credentials won’t authenticate under ActiveSync, but testing against another active Exchange mailbox account works fine.

Solution: remove all ActiveSync devices from the user profile, aka “clear ActiveSyncAllowedDeviceIDs.”

1. Launch Exchange Management Shell

2. Execute the following command, for example:

Set-CASMailbox Ellen.Ripley -ActiveSyncAllowedDeviceIDs $null


Office 365 & ADFS: Compiler Error Message: CS0101: The namespace ‘Resources’ already contains a definition for ‘CommonResources’

Over the past month, I participated in bringing Office 365 to our corporate environment. Working with my respective teammates, all that remained were some login customizations and an email announcement. Reviewing a previously referenced Microsoft TechNet article, I thought this should be a piece of cake.

Starting with one of our three ADFS proxy servers, I worked through the customization directions from “Customizing the ADFS forms based login page.” The first couple of files I updated per the article; before making any changes, I made a copy of each file, just in case of human error. Moving on to the directory C:\inetpub\adfs\ls\App_GlobalResources, I repeated the same steps on the file CommonResources.en.resx.

After testing, I received a generic .NET error. It having been a long day, I concluded there was a syntax error somewhere. More often than I’d like to admit, an errant space or some other syntax faux pas is the source of a .NET error. After some proofreading, nothing seemed out of sorts. In the interest of time, I rolled back to the original file by renaming it.


However, much to my chagrin, I still received the generic .NET error. After some reflection, I started to look at file dates and alternate servers, working my way from the inside out. Beginning with an internal ADFS server, I copied the same custom files over. At this point, I received the error message below. I looked at line 25; nothing appeared incorrect. Going back to the error, I deduced that having both files present might be the cause of the message. I moved the extra copies to my server desktop; testing again on the ADFS server, the error cleared. I repeated the same method on the ADFS proxy server with similar success. So, with much jubilation, I completed the customizations that day and promptly exited the building.

.net Error message:

Server Error in '/adfs/ls' Application.
Compilation Error
Description: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately.

Compiler Error Message: CS0101: The namespace 'Resources' already contains a definition for 'CommonResources'

Source Error:

[No relevant source lines]

Source File: c:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\adfs_ls\a6c27ef8\7adb2961\App_GlobalResources.h4pqoe46.2.cs Line: 25

Show Detailed Compiler Output:

c:\windows\system32\inetsrv> "C:\Windows\Microsoft.NET\Framework64\v2.0.50727\csc.exe" /t:library /utf8output /R:"C:\Windows\assembly\GAC_MSIL\System.Runtime.Serialization\\System.Runtime.Serialization.dll" /R:"C:\Windows\assembly\GAC_MSIL\System.IdentityModel\\System.IdentityModel.dll" /R:"C:\Windows\assembly\GAC_MSIL\System.Web.Extensions\\System.Web.Extensions.dll" /R:"C:\Windows\assembly\GAC_MSIL\Microsoft.IdentityModel\\Microsoft.IdentityModel.dll" /R:"C:\Windows\assembly\GAC_MSIL\System.Web.Services\\System.Web.Services.dll" /R:"C:\Windows\assembly\GAC_64\System.Web\\System.Web.dll" /R:"C:\Windows\assembly\GAC_MSIL\System.Web.Mobile\\System.Web.Mobile.dll" /R:"C:\Windows\assembly\GAC_64\System.Data\\System.Data.dll" /R:"C:\Windows\assembly\GAC_MSIL\System.Data.DataSetExtensions\\System.Data.DataSetExtensions.dll" /R:"C:\Windows\assembly\GAC_MSIL\System.WorkflowServices\\System.WorkflowServices.dll" /R:"C:\Windows\assembly\GAC_MSIL\System.Drawing\\System.Drawing.dll" /R:"C:\Windows\assembly\GAC_MSIL\Microsoft.IdentityServer\\Microsoft.IdentityServer.dll" /R:"C:\Windows\assembly\GAC_64\System.EnterpriseServices\\System.EnterpriseServices.dll" /R:"C:\Windows\assembly\GAC_MSIL\System.Xml.Linq\\System.Xml.Linq.dll" /R:"C:\Windows\assembly\GAC_MSIL\System\\System.dll" /R:"C:\Windows\Microsoft.NET\Framework64\v2.0.50727\mscorlib.dll" /R:"C:\Windows\assembly\GAC_MSIL\System.ServiceModel\\System.ServiceModel.dll" /R:"C:\Windows\assembly\GAC_MSIL\System.Configuration\\System.Configuration.dll" /R:"C:\Windows\assembly\GAC_MSIL\System.Core\\System.Core.dll" /R:"C:\Windows\assembly\GAC_MSIL\System.Xml\\System.Xml.dll" /R:"C:\Windows\assembly\GAC_MSIL\System.ServiceModel.Web\\System.ServiceModel.Web.dll" /out:"C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\adfs_ls\a6c27ef8\7adb2961\App_GlobalResources.h4pqoe46.dll" /debug- /optimize+ /res:"C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET
Files\adfs_ls\a6c27ef8\7adb2961\resources.commonresources.en___copy.resources" /res:"C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\adfs_ls\a6c27ef8\7adb2961\resources.commonresources.en.backup.resources" /res:"C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\adfs_ls\a6c27ef8\7adb2961\resources.commonresources.resources" /w:4 /nowarn:1659;1699;1701 "C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\adfs_ls\a6c27ef8\7adb2961\App_GlobalResources.h4pqoe46.0.cs" "C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\adfs_ls\a6c27ef8\7adb2961\App_GlobalResources.h4pqoe46.1.cs" "C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\adfs_ls\a6c27ef8\7adb2961\App_GlobalResources.h4pqoe46.2.cs" "C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\adfs_ls\a6c27ef8\7adb2961\App_GlobalResources.h4pqoe46.3.cs"

Microsoft (R) Visual C# 2005 Compiler version 8.00.50727.4927
for Microsoft (R) Windows (R) 2005 Framework version 2.0.50727
Copyright (C) Microsoft Corporation 2001-2005. All rights reserved.

c:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\adfs_ls\a6c27ef8\7adb2961\App_GlobalResources.h4pqoe46.2.cs(25,18): error CS0101: The namespace 'Resources' already contains a definition for 'CommonResources'
c:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\adfs_ls\a6c27ef8\7adb2961\App_GlobalResources.h4pqoe46.0.cs(11,11): (Location of symbol related to previous error)

Version Information: Microsoft .NET Framework Version:2.0.50727.5472; ASP.NET Version:2.0.50727.5474

Read more:
Customizing the ADFS forms based login page


5 more basic UNIX commands for troubleshooting for a Windows guy

Recalling another blog post from this week, 5 CentOS commands for basic troubleshooting for a Windows guy, I wanted to cover a few more UNIX commands. These commands have come in handy recently and in the past.

1. w, for “who is logged in to the system?” Simply type: w

Example of output:

USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT

root     pts/0    garzafx               14:05    0.00s  0.12s  0.00s w

When administering applications that require UNIX user logons, this command offers a quick view of the folks about to be displaced by service or system restarts (i.e. init 0, 1, 6 or reboot).

2. man, for help from the manual. If you had needed to get assistance on a command, simply remember, type man followed by what you are looking for! Syntax had followed this example;

man man or man ip

Example of output:

[root@localhost ~]# man man

man(1)                                                                  man(1)

man - format and display the on-line manual pages

man  [-acdfFhkKtwW]  [--path]  [-m system] [-p string] [-C config_file]
[-M pathlist] [-P pager] [-B browser] [-H htmlpager] [-S  section_list]
[section] name ...

man formats and displays the on-line manual pages.  If you specify sec-
tion, man only looks in that section of the manual.  name  is  normally
the  name of the manual page, which is typically the name of a command,
function, or file.  However, if name contains  a  slash  (/)  then  man
interprets  it  as a file specification, so that you can do man ./foo.5
or even man /cd/foo/bar.1.gz.

3. cp, for copying files. With UNIX configuration files of all sorts, this command provides a quick CYA measure worth mentioning. The syntax follows:

cp oldfile.txt newfile.txt
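A quick sketch of that CYA measure in practice: copy the file with a date suffix before touching it. The file name below is just a stand-in for this sketch, not a file from the post.

```shell
# Back up a config file with a date suffix before editing it.
# "ifcfg-eth0" is only an example name for this demo.
echo "DEVICE=eth0" > ifcfg-eth0                 # sample file for the demo
cp ifcfg-eth0 "ifcfg-eth0.$(date +%Y%m%d)"      # dated CYA copy
ls ifcfg-eth0*
```

If the edit goes sideways, a single cp in the other direction restores the original.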

4. mv, for renaming files. Some applications recreate their configuration files on restart. Keeping it simple for that scenario, I suggest a move in one command:

mv oldfile.txt newfile.txt

5. ps -ef | grep -i, for searching running processes system-wide. This command could use a lot of exposition; however, to avoid verbosity and confusion, I will mention just the syntax. This piped command has come up multiple times in my travels from AIX to CentOS. Years ago, I watched it used while shopping for crash dump files (i.e. core files). The most recent occasion was hunting down the NetworkManager service.

Example of output:

[root@localhost network-scripts]# ps -ef | grep -i network

root      1454     1  0 09:34 ?        00:00:00 NetworkManager --pidfile=/var/run/NetworkManager/
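One wrinkle worth knowing: "ps -ef | grep -i network" usually lists the grep process itself, because grep's own command line contains the word "network". The classic fix is to bracket one character of the pattern. A small sketch of why that works (the echoed strings simulate the command lines ps would show):

```shell
# The regex [n]etwork still matches the text "network", but it does NOT
# match the literal text "[n]etwork" shown on grep's own line in ps output.
echo 'grep -i network' | grep -ic 'network'                   # prints 1: plain pattern matches its own line
{ echo 'grep -i [n]etwork' | grep -c '[n]etwork'; } || true   # prints 0: bracketed pattern does not
```

So on a live system, ps -ef | grep -i '[n]etwork' returns only the real NetworkManager process.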

If you remember other commands worth mentioning, chime in. Hopefully, with your next system administrative crisis involving UNIX gremlins, this post provides some guide to navigating the waters of the CLI.


Read More:

Download CentOS 6.4
CentOS (Wikipedia)

5 CentOS commands for basic troubleshooting for a Windows guy

Recently, while configuring a third-party application on CentOS, I made some interesting notes troubleshooting an issue on UNIX. Before starting, I already understood navigating around some general commands like ls, ls -l, pwd, and top on Solaris 9.x, ESX 5.x, and Ubuntu 12.x, with a touch of vi. The work this week helped transpose that logic in search of my CentOS gremlin.

1. List services enabled at boot: chkconfig --list

In Microsoft Windows, many IT folks launch msconfig or open the registry to get a quick rundown of suspects to knife during boot. To get that list in CentOS, the command is:

chkconfig --list

This command prints each service and whether it is off or on at each runlevel. In my case, I needed to stop a service.
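When the list is long, it helps to filter it down to services enabled at the default runlevel (3 for a non-GUI server). A sketch against captured chkconfig output — the sample below is saved to a file so the parsing can be shown without a live chkconfig, and the services listed are just examples:

```shell
# Sample chkconfig --list output (hypothetical capture):
cat > chkconfig.txt <<'EOF'
NetworkManager  0:off   1:off   2:on    3:on    4:on    5:on    6:off
iptables        0:off   1:off   2:on    3:on    4:on    5:on    6:off
sshd            0:off   1:off   2:on    3:on    4:on    5:on    6:off
mysqld          0:off   1:off   2:off   3:off   4:off   5:off   6:off
EOF
# Field 5 is the runlevel-3 state; print only names of services that are on:
awk '$5 == "3:on" {print $1}' chkconfig.txt
```

On a real box this would be a one-liner: chkconfig --list | awk '$5 == "3:on" {print $1}'.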

2. Enable or disable a service at boot: chkconfig service (on/off)

In Windows, some people leverage the GUI with the services.msc snap-in, uncheck a box in msconfig, or set a value to 0 in the registry. Here the handy command follows this syntax: chkconfig service on or chkconfig service off.


chkconfig NetworkManager off

NOTE: This disables the network configuration service used by desktops


chkconfig iptables off

NOTE: This disables the firewall

3. List running network configuration: ip a

In the M$ world, this is accessible from a command prompt via ipconfig /all. On the CentOS platform, to get a quick rundown of network interfaces, I entered the following command:

ip a


[root@localhost network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:19:b9:b6:90:06 brd ff:ff:ff:ff:ff:ff
inet brd scope global eth0
inet6 fe80::219:b9ff:feb6:9006/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 00:19:b9:b6:90:08 brd ff:ff:ff:ff:ff:ff
4: eth1.0100@eth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noqueue state DOWN
link/ether 00:19:b9:b6:90:08 brd ff:ff:ff:ff:ff:ff
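To trim that output down to just the interface names, the numbered lines can be parsed with awk. A sketch using a shortened copy of the capture above, saved to a file (ip.txt) so it runs without a live system:

```shell
# Shortened sample of "ip a" output:
cat > ip.txt <<'EOF'
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN qlen 1000
EOF
# Interface name is the second ": "-separated field on the numbered lines:
awk -F': ' '/^[0-9]+:/ {print $2}' ip.txt
```

On a real box the same filter is simply: ip a | awk -F': ' '/^[0-9]+:/ {print $2}'.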

4. Print the routing table: netstat -rn

Sometimes in Windows it is important to understand the default gateway and the destination for all traffic, and the same applies here. In my case, the last line entry had a misconfigured gateway.


[root@localhost network-scripts]# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
                                                U         0 0          0 eth0
                                                U         0 0          0 eth0
                                                U         0 0          0 eth1
                                                UG        0 0          0 eth0

5. Restarting a service: service ServiceName (stop/start/restart)

In my adventure, I had three different services requiring some manual control (i.e. NetworkManager, MySQL, network). The one exception was MySQL, whose service name is mysqld.


service network restart

Shutting down loopback interface:                          [  OK  ]

Bringing up loopback interface:                            [  OK  ]

Bringing up interface eth0:                                [  OK  ]

Bringing up interface eth1:                                [  OK  ]

So with those commands in hand, and some editing of network configuration files, I fixed the issue preventing an application from completing initialization. Hopefully some of this explanation has illuminated a starting point for working with CentOS. The more light you seek, the fewer gremlins you'll see.


Read More:

Download CentOS 6.4
CentOS (Wikipedia)

RSA Authentication Manager 7.1 – Cannot add or manage user. The specified ID is in use by an unresolvable user

Every now and then, working with RSA, I come across something new. One of those happened to be a specific error message:

"Cannot add or manage user. The specified ID is in use by an unresolvable user. User IDs must be unique within an identity source"

This means a previously removed account (i.e. from an Active Directory integrated realm) still requires removal from RSA.
The first time I experienced this was when running out of tokens to reassign. The second instance occurred when an employee returned to the organization.


To fix this, I performed the following steps:

1. Log on to the RSA Security Console.

2. Select the SETUP tab.

3. Select Identity Sources.


5. Review the number of days. The default is 7.

6. Click NEXT. RSA will generate a preview of accounts to process and then complete.


7. Re-run CLEAN UP with an appropriate number of days to ensure you get the desired result. In my case I selected 1 day.

8. Re-running the function, I received the following: NO UNRESOLVABLE USERS WERE FOUND.



1. Log on to the RSA Security Console.

2. Select the SETUP tab.

3. Select Identity Sources.


5. Configure the appropriate options. Click SAVE.



Adding secure connectivity for remote administration of CentOS with NX Server

Having been part of IT, there is always adventure around the next keystroke, call, or email. This time around, I wanted to take a few minutes to document the steps for getting remote connectivity working on a flavor of UNIX, CentOS. Let me preface this by saying UNIX's strong suit has always been its level of customization; the trick is knowing which pieces to squeeze together for a given administrative task. On that note, here are the steps for setting up remote connectivity with an install of NX Server on CentOS.


Before proceeding, the target system requires an install of CentOS (6.x) with all the default settings. I performed the following steps on the server and a Windows 7 desktop.

  1. Logged on to the CentOS server locally.
  2. Elevated permissions in a terminal by typing, su
  3. Entered the password (i.e. root password, etc.)
  4. Entered, yum install nx freenx
  5. Allowed the installation to complete.
  6. Switched directories with the following command, cd /etc/nxserver
  7. Opened the configuration file with vi by entering the following command, vi node.conf
  8. Started the edit with vi by pressing, i
  9. Located the following variable and set it to 1, ENABLE_PASSDB_AUTHENTICATION="1"
  10. Pressed the ESC key.
  11. Saved the file and quit with the following key combination, :wq
  12. Added a user to the NX server with, nxserver --adduser packetfence
  13. Generated a client key by typing, cat /var/lib/nxserver/home/.ssh/client.id_dsa.key
  14. Copied the results into Notepad for use on the Windows 7 desktop.
  15. Launched the NX client on Windows.
  16. Opened the NX client configuration from the dialog box by clicking CONFIGURE.
  17. Clicked the GENERAL tab.
  18. Selected KEY.
  19. Pasted the key from step 13 into the field.
  20. Selected KDE as the desktop session.
  21. Saved changes and attempted to log on. It failed.
  22. Went to the CentOS server and entered, service sshd restart
  23. Followed up with the next command, chkconfig sshd on
  24. Important: the chkconfig sshd on command ensures that the SSH service starts on boot.
  25. Relaunched the NX client and attempted to connect. This time it connected successfully.
  26. Rebooted the server and verified remote login again.


Read more:

PowerShell – Search against large .csv file(s) for a string

Previously, I had exported log files of considerable size, parsing certain columns of data out to multiple .csv files. Now I wanted a response file with the corresponding variable(s) and each entry (i.e. time stamp, etc.), searching against one or multiple files. Below is the process for this search and export. In this example, I am using Windows 2012.

1. Right-click on the PowerShell icon and select RUN AS ADMINISTRATOR and OPEN.



2. Determine PowerShell version with the following command: $host.version

IMPORTANT: PowerShell 2.0 and above must be used to support the forthcoming commands.


3. Change directory to the .csv file(s) location (i.e. c:\logs):


CD C:\logs

4. Create a directory to place your exported results (i.e. c:\garzafx)

5. Run the following command to export the search results to a .csv file:

Get-ChildItem | Get-Content | Select-String -Pattern "ellen ripley" | Export-Csv c:\garzafx\ellen.ripley.csv

a. Get-ChildItem references the file or files within the current directory. For my purposes here, given the size of the data files, I referenced a few at a time.

b. Get-Content simply reads what Get-ChildItem pulled from the aforementioned directory (i.e. C:\logs).

c. Select-String -Pattern provides the variable to match against (i.e. ellen ripley).

6. Here are the export results, with time stamp and pattern match, for Ellen Ripley.
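For comparison with the CentOS posts above, the rough UNIX counterpart of this Get-ChildItem | Get-Content | Select-String pipeline is a case-insensitive grep across the directory. The directory, files, and pattern below are just this post's examples, recreated as sample data:

```shell
# Build a couple of sample .csv files, then search them the UNIX way.
mkdir -p logs
printf '2013-07-25 07:14,Ellen Ripley logged on\n' > logs/a.csv
printf '2013-07-25 07:15,Dallas logged off\n'      > logs/b.csv
grep -ril 'ellen ripley' logs/   # -r recurse, -i ignore case, -l list matching files
```

Dropping -l prints the matching lines themselves, which is closer to what Select-String returns.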


More examples and information on these PowerShell commands:

Get-ChildItem (Microsoft Technet)

Get-Content (Microsoft Technet)

Select-String (Microsoft Technet)

To Get PowerShell 3.0

Download Powershell 3.0

PowerShell, Get-Winevent returns a blank column for ‘Message’ property

As part of a process to verify event logging across a Windows infrastructure, I came across an odd PowerShell query issue. I had hoped to leverage PowerShell to mass-convert a folder's worth of archived logs (i.e. 1 GB/file) into one or more .csv file(s). After fifteen minutes, I had composed a simple query for the conversion:

Get-WinEvent -Path "c:\logs\*.evtx" | Select-Object -Property TimeCreated, Message | Export-Csv "C:\garzafx\logs.csv"

In other words: get the Windows logs from this folder, grab "time created" plus the "message" body, and export to a .csv file to view in Excel.

The initial conversions were successful, and I scheduled the full set of files to run a few days later. Revisiting the process, the message field had stopped providing data; it returned a blank column. So, between forums, I executed the same query with the same log files in the following scenarios:

1. Loaded Windows 2008 R2 and associated patches.

2. Upgraded to Windows Server 2012 from 2008 R2.

3. Clean install of Windows Server 2012, no windows updates.

4. Freshly reimaged system running Windows 7 from Service Desk.

5. Upgraded to PowerShell 4.0 beta.

6. Switched to PowerShell with the -Version 2 switch.

7. Applied US English language 3rd party PowerShell custom script with my query nested.

All actions resulted in the same blank 'Message' column. Saving out the logs manually gave me the following prompt, and some hope.


However, the originating server provided the same empty results. Reading a few more blogs, I read up again on Log Parser Studio 2.0. This application provides something new: a mix of SQL and PowerShell commands.

8. After launching LPS 2.0, I selected the folder for my logs (i.e. C:\logs) from the following window.


9. Updated the SELECT statement to use Strings for the message property, with the TOP 10 statement provided by LPS 2.0. The following was successfully returned for the 'Message' property.


10. After dropping the TOP 10 reference, I pressed the export button to turn the T-SQL commands into a PowerShell script.


11. After LPS 2.0 exported myscripts.ps1, I executed Set-ExecutionPolicy RemoteSigned at the prompt. Finally, I was able to convert .evtx files into .csv files with the 'Message' column populated. Below is an example of the original script. In the end, Log Parser Studio 2.0 was a great find for anyone looking for another means of generating PowerShell scripts. As for the original query not working, some other bloggers have pushed the notion of a software bug.

Example of output:

Install Log Parser Studio


# Generated by Log Parser Studio #

#Log Type: EVTLOG
#Generated: 7/25/2013 7:14:54 AM

Write-Host "You have chosen" $OutFileType "as the output, but this script was originally generated as" $OriginalFileType -ForegroundColor Red
Write-Host "Either change -OutFile to" $OriginalFileType "or generate the script again with the output as" $OutFileType -ForegroundColor Red
Write-Host "You can also modify the OutputFormat variable in this script to match the correct Log Parser 2.2 COM output format." -ForegroundColor Red

$OutputFormat.oTsFormat = "yyyy-MM-dd hh:mm:ss"

-Activity "Executing query, please wait..." -Status " "

= "SELECT TimeGenerated,Strings INTO '" + $Destination + "' FROM 'C:\garzafx\myeventlog.evtx'"

Write-Host $_.Exception.Message -ForegroundColor Red
Write-Host $_.Exception.GetType().FullName -ForegroundColor Red
Write-Host "NOTE: No output file will be created if the query returned zero records!" -ForegroundColor Gray

For more information on:

a. Log Parser Studio 2.0

b. Get-Winevent  (TechNet)

c. select-object  (TechNet)

d. Set-ExecutionPolicy RemoteSigned (TechNet)

PowerShell, cleanup full access permissions on Microsoft Exchange mailboxes

Periodically, it is necessary to clean up mailbox permissions across a Microsoft Exchange server or Exchange organization. Sometimes other IT administrators, myself included, forget to remove self-applied permissions in the heat of providing employee support. Two tasks that help: an inventory of existing full access permissions, and selective bulk removal.

A. Inventory all mailboxes and the accounts holding full access permissions.

This will provide an export in CSV format to sort against for review.

1. Launch the Exchange Management Shell as administrator, with appropriate Exchange Organization or Exchange Server permissions.

2. Create a local folder for exports (i.e. C:\garzafx\).

3. Export all full mailbox permissions to an Excel CSV file as follows:

Get-Mailbox -Server “myemailserver” | Get-MailboxPermission | export-csv c:\garzafx\

NOTE: If you haven’t already created your own folder for exports on your system, please do so to avoid any inadvertent errors.

B. Selective account removal

Now that you have your variables to search against, you can create a Get-Content script or just keep it simple with the following:

4. Get-Mailbox | Remove-MailboxPermission -AccessRights FullAccess -User "weyland\ellen.ripley"

IMPORTANT: For Send-As permissions, you will have to employ Get-ADPermission instead.


More Information On Exchange:

More on PowerShell: