How To Load A Custom Function In PowerShell

OR - How To Set Security Permissions To Run Other People’s PowerShell Scripts

I had to configure a custom PowerShell function on my clustered file server to try to fix a permissions issue I ran into after one of my SANs failed. Google was nice enough to point me here http://learn-powershell.net/2014/06/24/changing-ownership-of-file-or-folder-using-powershell/ where I found a solution to my problem. When you’re in crisis mode, it can sometimes take a minute to remember to set security correctly to run a downloaded .ps1. So, since I rarely forget to do something I blog about, this is more for me than for you. 😉

At the bottom of Boe’s blog, there is a download link that sends you to the TechNet Gallery to download the .ps1 file. For the sake of simplicity I downloaded it to C:\Users\mrichardson\Set-Owner.ps1, since that is the folder PowerShell opens in. If this is a function (or script) you anticipate using frequently, a better location would probably be your Modules folder. Browse to the function/script you downloaded, right-click the file, go to Properties, and on the General tab click the “Unblock” button at the bottom.

Unblock POSH

Next, open PowerShell with Administrator privileges and set the execution policy to “RemoteSigned”. This allows locally stored scripts to run while still requiring scripts downloaded from another system to be signed or unblocked. You set the execution policy by running the following PowerShell command:

Set-ExecutionPolicy RemoteSigned

Now you can load the function by running the following command (substituting the appropriate file name).

. .\Set-Owner.ps1

You should now be able to use the function as per its article. Happy scripting!
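Side note: if you’re on PowerShell 3.0 or later, the Unblock-File cmdlet can replace the Properties-dialog step entirely. A minimal sketch, assuming the script sits in the current folder:

# Unblock the downloaded script, dot-source it, then confirm the function is available
Unblock-File -Path .\Set-Owner.ps1
. .\Set-Owner.ps1
Get-Command Set-Owner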


How To Add Additional Users to your AWS Account Using IAM

 

  1. Log into AWS and go to the IAM Service:
  2. If this is the first user you are adding to this particular AWS account, click “Create New Group of Users”. If this is not the first user added to this account, skip to step 4.
  3. Name your group with an appropriate name. In the example below, I chose “AccountAdmins”. Keep in mind no spaces are allowed.
  4. On the next pane, choose the appropriate level of access from the template list. We don’t recommend straying outside the policy list, as that level of granularity significantly increases administrative management of the groups.
  5. Accept the default permissions and click Continue on the Edit Permissions screen.
  6. Enter the name of the user you wish to create and add to this group, then click Continue. Leave the “Generate an access key for each User” box checked.
  7. On the next screen, click Finish.
  8. The wizard will now give you the opportunity to download the user credentials for the user you created in step 6. Download these credentials and name them appropriately. YOU WILL NOT GET ANOTHER OPPORTUNITY TO DOWNLOAD THESE USER CREDENTIALS!
  9. This will put you back on the “Getting Started” page for IAM. On this page, click the link for the user to edit the user. Note the AWS Account Alias section. You will need the URL listed there to log in with the credentials you’re creating for this account.
  10. Click the tick box next to the user you created in step 6 and then click the “Manage Password” button.
  11. In the “Manage Password” window that appears, click the radio button next to “Assign a custom password”, enter the assigned password for the user you’re adding, and click Apply.
  12. You’re all done. Provide the user ID, password, login URL, and the file you downloaded in step 8 to the individual who will be using this account, and they should now have the assigned level of access. (A rough AWS CLI equivalent follows below.)
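For reference, here is a rough sketch of the same flow using the AWS CLI instead of the console wizard. This assumes the CLI is already configured with credentials that have IAM admin rights; the group name, user name, policy, and password below are just examples:

# Create the group and attach a managed policy (example: full admin)
aws iam create-group --group-name AccountAdmins
aws iam attach-group-policy --group-name AccountAdmins --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# Create the user, drop them in the group, and generate credentials
aws iam create-user --user-name jdoe
aws iam add-user-to-group --group-name AccountAdmins --user-name jdoe
aws iam create-access-key --user-name jdoe
aws iam create-login-profile --user-name jdoe --password 'TempP@ssw0rd!'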

Using PowerShell to Manage Distribution Groups in Exchange 2007

This is a quick post for a small task. I found the basis for the commands in Ying Li’s post here.

We had an admin leave us and go to Facebook recently.  She was a member of a TON of our distribution groups set up for all of our Amazon Web Services accounts.  Well, I couldn’t go into the EMC and remove the user from the groups, and there were a bunch of them, so I really didn’t want to click each individual group and remove her.  So, I did a quick google search and strung a command together to remove the user from the groups.

All of our AWS accounts start with AWS, i.e. AWS-ClientName@company.com.  So, this is what I came up with:

Get-DistributionGroup "AWS*" | Remove-DistributionGroupMember -member oldadmin

That worked like a charm.  So, I then ran this command to add myself to those same groups:

Get-DistributionGroup "AWS*" | Add-DistributionGroupMember -member mrichardson

That too worked like a charm.  I had some other cleaning up to do, so I incorporated a couple of other commands to remove the old admin from all groups.  That required two different commands:

Get-DistributionGroup "*" | Remove-DistributionGroupMember -member oldadmin
Get-SecurityGroup "*" | Remove-SecurityGroupMember -member oldadmin

Of course I got errors for the groups she was not a member of, but that was to be expected. That pretty much sums it up.  Hope this is helpful for someone.
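One extra tip: if you want to see what a sweep like this will touch before it actually touches anything, Remove-DistributionGroupMember supports -WhatIf, so you can preview the removals first. A quick sketch:

# Preview which memberships would be removed without making any changes
Get-DistributionGroup "AWS*" | Remove-DistributionGroupMember -Member oldadmin -WhatIf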


Using PowerCLI to Auto-Update VMWare Tools

If you’ve ever read my blog you already know I like to ramble, so if you’d like to get to the nitty gritty of this script, skip down to the “Rubber, meet Road” section.

First of all, excuse my ‘noobness’ as I recently started supporting a VMWare server cluster, and sometimes those of us that just start working with a proven technology find things and go… “Holy Crap! You can do that?” A while ago, we upgraded our VCenter to version 4.something-or-other. After that upgrade, you have to upgrade the VMWare Tools. Well, while I was attending one of the local VMUG meetings, someone mentioned that you can do the upgrade and suppress the reboot via the command line. So, once I got back to the shop, I used this simple command:

Get-VM vmname | Update-Tools -NoReboot
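As an aside, if you want to push that to every powered-on Windows guest in one shot instead of naming VMs one at a time, something like this sketch should do it (I ran it per VM at the time, so treat this as a variation I have not battle-tested):

# Upgrade tools (no reboot) on every powered-on guest whose OS reports as Windows
Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" -and $_.Guest.OSFullName -match "Windows" } | Update-Tools -NoReboot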

That was handy. All guests upgraded, no reboots, upgrade will complete during reboot of next patching cycle. Marvelous! Then, later I was digging around and found this setting:

I said to myself… you guessed it, “Holy Crap! You can do that?” So, I started flipping through all of our Windows guests and turning that on, making it so I didn’t have to worry about it ever again. Then came the arrival of VCenter 5.0. We diligently upgraded our VCenter and hosts shortly before a patch cycle. After the patch cycle, all of my servers were still reporting that their VMware tools were out of date. Scratching my head, I double checked, and the previous setting for auto-update during power cycle had been reset to its default, which is unchecked. UGH! The first time around, I thought I’d only have to do it once, so I changed the setting manually for my 100+ servers. This time, I decided to write a script, especially since the upgrade had changed my settings back to default, so there was a good chance this would happen again. Not only that, I can set this script to run automatically before each patch cycle so any new VMs will get set as well.

Of course, I started out with our friend Google. I found a pretty darn good post by Damian Karlson where I got most of my script and a few other links from that page for some more information. What I didn’t find was a “Here is what you have to know if you’re a noob” post, so I had to do some figuring out on my own.

Rubber, meet Road

First of all, I use the PowerGUI Pro Script Editor from Quest Software for all of my scripting needs, and I highly recommend it. You also have to make sure you have VMware vSphere PowerCLI installed on the workstation that you’re running the script from, and make sure you have the PowerCLI libraries loaded. Secondly, you have to run your PowerCLI commands with appropriate rights. Check out my post on how to do that for programs by default (especially if you have a different account for admin rights). Thirdly, you have to connect to a server, which Damian’s post completely skips, because he and his readers know what they’re doing, unlike me. Finally, I had the challenge of having a large number of *nix systems that I have to skip, since *nix admins are all picky about their software… so, Damian’s original script just goes out, finds all VMs, and changes the setting. Mine needed to be a bit pickier. One of the comments on Damian’s post had a check in there to give me the command to single out Windows guests, so I put it all together, and this is what I got:

First connect PowerCLI to your VCenter server with this command:

Connect-VIServer -Server 10.10.10.10

Where 10.10.10.10 is the IP of your vCenter server. I believe you can connect to each host individually, but if you have vCenter and more than one host, that doesn’t make much sense. This will prompt you for credentials, unless you launched the PowerShell instance as a user with appropriate permissions, in which case it will just connect you:

Name        Port User
----        ---- ----
10.10.10.10 443  DOMAIN\user

You could get a warning about an invalid certificate, a lot like you do when you connect with VSphere Client for the first time. You can turn off that warning with this command:

Set-PowerCLIConfiguration -InvalidCertificateAction Ignore

Here is the script that I ran to change the setting on all of my Windows Guests:

Get-VM | Get-View | ForEach-Object {
    Write-Output $_.Name
    if ($_.Config.Tools.ToolsUpgradePolicy -ne "upgradeAtPowerCycle" -and $_.Guest.GuestFamily -match "windowsGuest") {
        $vm = Get-VM -Name $_.Name
        $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
        $spec.ChangeVersion = $vm.ExtensionData.Config.ChangeVersion
        $spec.Tools = New-Object VMware.Vim.ToolsConfigInfo
        $spec.Tools.ToolsUpgradePolicy = "upgradeAtPowerCycle"
        $_this = Get-View -Id $vm.Id
        $_this.ReconfigVM_Task($spec)
        Write-Output "Completed"
    }
}

The only real difference between this script and the one Damian wrote is that mine will check if the Upgrade at Power Cycle flag is not already enabled, and then check if it is a Windows guest. If both conditions are true, then it will change the setting to “Upgrade at power cycle”. I referenced this post that discusses PowerShell’s If AND statements to refine line 3 of the script above.

Thanks to Damian and the comment made by Travis that got me 99% of the way to my solution.


Check State of Service, Start if Stopped

 

Interestingly, this seemingly simple task took me a bit to track down and put together. You’d think this would be a common task with lots of posts about it, but most of what I found was confined to starting and stopping the service, and few had the whole “check state” part included. I found several posts, but none of them worked for me. Finally, I found this post by Ralf Schäftlein that did the trick. In Ralf’s post, he is checking all VMWare services. My goal was to check and start the SQL services of a particular SQL instance on a server, so I had to tweak his script ever so slightly to make it work. Here it is:

#This script uses the $SQLInstance variable to check if a particular SQL Instance's services are running and start them if stopped.

$SQLInstance = "INSTANCENAME"

foreach ($svc in Get-Service)
{
    if (($svc.DisplayName.Contains("$SQLInstance")) -AND ($svc.Status -eq "Stopped"))
    {
        Write-Output $svc.DisplayName
        Start-Service $svc.Name
    }
}

Save this as sqlservicecheck.ps1, run it, and you should be good to go. Quick note: the instance name search using .Contains is CASE SENSITIVE! That one added about 10 minutes to testing.
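If you would rather sidestep the case sensitivity altogether, PowerShell’s -like operator is case-insensitive by default, so a variation along these lines should behave the same without the gotcha (a sketch, not what I originally ran):

# -like is case-insensitive, so the instance name can be typed in any case
$SQLInstance = "InstanceName"
Get-Service | Where-Object { $_.DisplayName -like "*$SQLInstance*" -and $_.Status -eq "Stopped" } | ForEach-Object {
    Write-Output $_.DisplayName
    Start-Service $_.Name
}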

 


Zip, Chunk, and Transfer Files via FTP using PowerShell and 7zip

I had a unique problem and spent the last several weeks working on a script to resolve it. We have a client whose servers, SQL databases, and websites we are responsible for maintaining. The servers that are running everything are hosted. Our problem was that there was no local copy of the database for backup, testing, and staging. So, my mission was to get the databases backed up offsite. My challenge was that one of these databases is 30+ GB. That is a lot of file to move over the wire. Luckily we have a VPN connection established between the two sites, so I did not have to worry about security for this file transfer. If time permits, I may redo this script with SFTP, but for now FTP will have to suffice.

I chose 7zip for my zipping and chunking because it was the easiest utility with the smallest footprint, and I got it to work via PowerShell.

I had every intention of keeping a list of my sources for this blog, but unfortunately due to the size of the database and my limited time in which to test, I lost track of all of the sites I used to put all of the pieces together that are necessary for this script. PLEASE, if you see something in this script that I took from one of your scripts (or forum responses), please leave a comment and I will happily give you credit where credit is due.

PLEASE NOTE: You have to place the PowerShell script in a completely separate folder from the files you’re processing. I did not write logic into this script to exclude .ps1 files from processing. I chose a self-describing folder: C:\DBFileProcessScript for the script and log files.

Here is the script with details surrounding what each portion of the script does:

<#
.SYNOPSIS
Zips up files and transfers them via FTP.
.DESCRIPTION
Searches the ‘DBBackup’ folder for all files older than two weeks with
the file extension .bak and moves them to a ‘Process’ folder. It then
moves all other files to a separate folder for cleanup. It then zips
the files and breaks them up into 100MB chunks for more reliable FTP file
transfer. Checks for any thrown errors and emails those errors once the
script finishes.
.NOTES
    File Name : Zip_FTP_DBFiles.ps1
    Author    : Matt Richardson
    Requires  : PowerShell V2
#>

# First, we need to clear the error log.
$error.clear()

#This portion of the script moves the files from the DBBackup folder to the
#Process folder if the file is more than two weeks old. It also moves the .trn
#and .txt files to a separate folder for cleanup later.

foreach ($i in Get-ChildItem C:\DBBackup\*.bak)
{
    if ($i.CreationTime -lt ($(Get-Date).AddDays(-13)))
    {
        Move-Item $i.FullName C:\DBBackup\Process_Folder
    }
}
foreach ($i in Get-ChildItem C:\DBBackup\*.t*)
{
    if ($i.CreationTime -lt ($(Get-Date).AddDays(-13)))
    {
        Move-Item $i.FullName C:\DBBackup\Old_TRN_Logs
    }
}

#This portion of the script sets the variables needed to zip up the .bak files
# using 7zip. The file query portion of this section makes sure you’re not
# accidentally getting anything other than the .bak files in the event someone
# puts other files in this folder.

$bak_dir = "C:\DBBackup\Process_Folder"
$file_query = "*.bak"
$archivetype = "zip"

#Alias for 7-zip – needed otherwise you get Parse Errors. I had to move the 7z.exe
# file to both the Program Files and Program Files(x86) folders for this to work.
# I know I could have probably noodled with the script a bit more so that this
# wasn’t required, but I haven’t gotten around to that.

if (-not (Test-Path "$env:ProgramFiles\7-Zip\7z.exe")) { throw "$env:ProgramFiles\7-Zip\7z.exe needed" }
Set-Alias sz "$env:ProgramFiles\7-Zip\7z.exe"

#Change the script so that is running in the correct folder.

cd $bak_dir

#This section chunks up the files and then deletes the original file. I had to do
# the removal for lack of space issues. I would recommend moving this part to the
# end assuming you have space.

$files = Get-ChildItem . $file_query | Where-Object { !($_.PSIsContainer) }

ForEach ($file in $files)
{
    $newfile = ($file.FullName + ".$archivetype")
    sz a -mx=5 -v100m $newfile $file.FullName
    Remove-Item $file
}

#This cleans up the tran and txt logs since we’re not copying them offsite.

Remove-Item c:\DBBackup\Old_TRN_Logs\*.t*

#This portion of the script uploads the files via FTP and tracks the progress,
# moving the failed files to a separate folder to try again later. The try
# again later part is yet to be written so for now I do it manually on failure.

foreach ($i in Get-ChildItem "C:\DBBackup\Process_Folder")
{
    $file = "C:\DBBackup\Process_Folder\$i"
    $ftp = "ftp://username:password@ftp.server.com/$i"

    "ftp url: $ftp"

    $webclient = New-Object System.Net.WebClient
    $uri = New-Object System.Uri($ftp)

    "Uploading $file..."

    $webclient.UploadFile($uri, $file)

    # Capture the result of the upload before any other command resets $?
    $uploadSucceeded = $?
    $uploadSucceeded | Out-File -FilePath "c:\DBFileProcessScript\$(Get-Date -f yyyy-MM-dd).txt" -Append

    if (-not $uploadSucceeded)
    {
        Move-Item $file c:\DBBackup\Retry
    }
    else
    {
        continue
    }
}

#This portion cleans up the process folder.

Remove-Item c:\DBBackup\Process_Folder\*

#This portion sends an email with the results and any errors.

Send-MailMessage -To "alerts@company.com" -From "report@company.com" -Subject "File Transfer Complete" `
    -Body "The weekly file transfer of the Database files has completed. If there were errors, they are listed here: $Error" `
    -SmtpServer smtp.company.com

My next challenge was that this job had to run on a schedule. Since it takes approximately 5-6 hours to zip and transfer 30GB worth of database, I obviously wanted to run it during off-hours. I compiled it into an .exe and scheduled it to run at midnight using Task Scheduler. Unfortunately, the SQL backups were also set to run at midnight, and this script trying to run at the same time as the backups caused the server to lock up and go offline for about 20 minutes. I figured I could safely schedule it for 3 or 4 a.m., but I wanted it to start as soon as possible. So, I wrote a TSQL script to call this one and edited the maintenance job in SQL to run the PowerShell script upon completion of the backups. This gave me two advantages. One is that it would run immediately after the backups were complete, maximizing my off-hours time. Two was that if for any reason the backups failed, it wouldn’t run and delete transaction logs and clean up files that may still be needed after a failed backup.

Here is the TSQL Script I found and modified:

EXEC sp_configure 'show advanced options', 1
GO
-- To update the currently configured value for advanced options.
RECONFIGURE
GO
-- To enable the feature.
EXEC sp_configure 'xp_cmdshell', 1
GO
-- To update the currently configured value for this feature.
RECONFIGURE
GO
EXEC xp_cmdshell 'powershell.exe -Command "C:\DBFileProcessScript\Zip_FTP_DBFiles.ps1"'

As I am relatively new to TSQL scripts, I honestly don’t know if the first four commands are necessary to execute every time, but I don’t think it would hurt to re-apply them every time even if it is a persistent setting.

Next is the script to rehydrate the files on the far end. I’ll post that once I am finished with it.
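In the meantime, here is a rough sketch of what I expect the far-end piece to look like. 7-Zip reassembles the split volumes automatically when you point it at the first (.001) volume; the paths below are placeholders:

# Extract each multi-volume archive; 7-Zip stitches the .001/.002/... pieces back together
Set-Alias sz "$env:ProgramFiles\7-Zip\7z.exe"
foreach ($archive in Get-ChildItem "D:\Inbound\*.zip.001")
{
    sz x $archive.FullName "-oD:\Restored"
}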


Exchange 2007 Catalog (a.k.a. Index) Maintenance

If you don’t want to hear the back story, go ahead and skip down to the ‘Rubber, Meet Road’ section; I tend to be windy. Is it even called windy when you type incessantly? I digress….

So, we’re in the middle of an Exchange migration. As most of you know, sometimes these migrations can be painfully slow and suffer setbacks, delays, etc. To that end, we had to move mailboxes off of server A and over to server B due to disk storage issues. Sadly, server A doesn’t have enough room on the drive to do an offline defrag after moving the email to server B. Needless to say, the drive is REALLY low on space, I’m talking under 1GB out of 600GB low. I monitored the disk carefully for a while, making sure that the almost 400GB of whitespace I had freed up in the databases was doing the trick and keeping the drive from filling up. After a week or two, things were looking good with no substantial decrease in disk space, which was what I expected. Alas, a month later (yes, the migration was only supposed to be delayed by a few weeks, but again, we all know how that goes) I get more disk space alerts, and about 100 more MB of space had been chewed up. I filter the event log for event ID 1221, which tells me that I have ample whitespace remaining in the two databases…. So what is taking up my space? Following best practices, log files are on a different drive, so I know that isn’t the culprit. So, as you likely already figured out from the title, it was in fact the Exchange CatalogData-<GUID> folder(s). There were two folders, one for each database, and their total size was 15GB. Now, that is a lot of index for only 300GB of mail. I knew that even 10GB would likely buy me the space I needed to ride this migration delay out and not have to perform any unnecessary maintenance and downtime.

In Exchange 2007 the CatalogData-<GUID> folder is located in the same folder as the mailbox database by default. I had initially set out to move the CatalogData-<GUID> folders as I have some other drives on that server with some space. One of the first few links returned on my search sent me to this article by Vidad Cosonok. Vidad’s article was a quick ‘how to’ on freeing up space in short order on a filled up drive to buy some time. I read through it, and it looked easy enough, and it also didn’t look like it was going to cause any downtime so I gave it a go. Right around 5 p.m. (to be on the safe side) just like the article said, I stopped the Microsoft Exchange Search Indexer service, deleted two CatalogData-<GUID> folders with a hard delete (shift-delete) and restarted the service. Bing Bang, just like that, I had 15GB of free space on the drive, no errors, and the service had re-created the folders and was re-indexing the two databases. It is important to point out, that if a user tries to run a search on their email while the index is being rebuilt, it will take an extremely long time and may return a “No Items Found” when the item is actually there (false negative). It might behoove you to do this after hours, and depending on your user tolerance, you might consider doing it over a weekend as it will tax the server during the rebuilding of the index. The next morning, I checked the CatalogData folders and their combined size was 1.5GB. How about that! This leads me to believe that Microsoft Exchange 2007 does zero maintenance on catalog files, and an administrator should probably add it to their best practices and list of yearly maintenance tasks to rebuild the indexes. I would also encourage you to do this shortly after a migration on the source server, as is the case in this scenario, assuming you need the space. I monitored the index over the next couple of days and found the CatalogData-<GUID> folder did not increase much, which suggests to me that it is done with the initial indexing pass on the database and is simply keeping up with new email.

Rubber, Meet Road

Initial Problem: Free Space on Exchange 2007 Database Drive

Initial Solution: Move CatalogData-<GUID> File(s)

Actual Solution: Regenerate Exchange 2007 Database Index to flush old index files and return space to the OS.

Steps Taken:

  1. Stop Microsoft Exchange Indexer Service
  2. Delete CatalogData-<GUID> folders*
  3. Start Microsoft Exchange Indexer Service
  4. Monitor for the full re-index to complete. (A rough PowerShell sketch of steps 1-3 follows.)
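If you prefer to do it from PowerShell, a minimal sketch of those steps might look like this. The service below should be the Exchange 2007 “Microsoft Exchange Search Indexer” (short name MSExchangeSearch), and the database folder path is just an example you would swap for your own:

# Stop the indexer, hard-delete the catalog folders, then start the indexer to trigger a rebuild
Stop-Service MSExchangeSearch
Remove-Item "D:\ExchangeDatabases\First Storage Group\CatalogData-*" -Recurse -Force
Start-Service MSExchangeSearch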

It is important to point out, that if a user tries to run a search on their email while the initial re-index is occurring, it will take an extremely long time and may return a “No Items Found” when the item is actually there (false negative). It might behoove you to do this after hours, and depending on your user tolerance, you might consider doing it over a weekend.

One more quick disclaimer: I bugged my Exchange guru colleague Robert Durkin and asked about the application of this situation to Exchange 2010, and he said that while there are some differences with the distribution of the catalog when a 2010 mailbox database is in a DAG, in general the process should be the same in 2010 as mentioned by the article I linked above by Vidad Cosonok. I did not test this on 2010 and cannot speak to its applicability, so as always I’d recommend testing first.


Manually Remove a Service with PowerShell

From time to time, you’ll be faced with a piece of software whose uninstall is poorly written, a virus or malware, or a freak power failure during an uninstall. In instances like these, you might have to remove an orphaned service in Windows. In my particular case, our old monitoring software was Zenith Infotech, and their software left behind two services that can really booger up an Exchange server if you don’t get rid of them.

The first thing you need to do is open up Server Manager, and drill down to the server’s services and get the name of the service(s) you need to remove by right clicking on the service and selecting properties from the sub menu:

At this point, I personally opened up regedit and verified the location of the service in the registry for sanity’s sake:

Now we have the information we need to delete the service. If you have just one service on one server, then you can just delete the service’s registry key from Registry Editor and be done with it. Since I have over 90 servers I need to do this for, I strung together these PowerShell commands to remove these services.

The first thing I decided to do was stop the service, just in case it was actually trying to do something to the OS:

Stop-Service SAAZappr

The next command identifies the registry key to be removed (everything after the HKLM: part as it appears at the bottom of the Registry Editor window highlighted above) and removes it, and by adding the -Recurse switch, we’re also telling it to automatically remove all of its sub-containers, keys, and parts. For good measure, I tagged -Force on the end in the event some sort of permissions issue decided to rear its ugly head:

Get-ChildItem HKLM:\SYSTEM\CurrentControlSet\Services\SaaZAppr | Where-Object {$_.PSChildName -ne 'CLSID'} | Remove-Item -Recurse -Force

Finally, I took the data from the “ImagePath” section of the registry key and made sure I deleted all of the folders, subfolders, files, etc. from the server that were also potentially left behind, also using the -Recurse and -Force switches:

Remove-Item "C:\program files\SAAZOD" -Recurse -Force

The last thing I did was compile the script into an .exe to ease deployment to all of my servers. I compiled it into an .exe using PowerGUI Pro.

So, the final script, removing both of the SAAZ services and covering both 32-bit and 64-bit installations, looked like this:

# SaaZ Services Killer
# Written by Matt Richardson
# 02/14/2012

Stop-Service SAAZappr
Stop-Service SAAZapsc

Get-ChildItem HKLM:\SYSTEM\CurrentControlSet\Services\SaaZAppr | Where-Object {$_.PSChildName -ne 'CLSID'} | Remove-Item -Recurse -Force
Get-ChildItem HKLM:\SYSTEM\CurrentControlSet\Services\SaaZapsc | Where-Object {$_.PSChildName -ne 'CLSID'} | Remove-Item -Recurse -Force

Remove-Item "C:\program files\SAAZExmonScripts" -Recurse -Force
Remove-Item "C:\program files\SAAZOD" -Recurse -Force
Remove-Item "C:\program files (x86)\SAAZExmonScripts" -Recurse -Force
Remove-Item "C:\program files (x86)\SAAZOD" -Recurse -Force

This script will throw errors every time, since it tries to delete both the 32-bit and 64-bit installation folders, but the errors are harmless and don’t stop the script from completing, so I didn’t see the harm in it or the value in building in the logic to identify the version and delete accordingly.
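As an aside, if you only have a service or two to clean up on a single box, sc.exe can delete the service registration directly instead of editing the registry. A quick sketch (run from an elevated prompt):

# Deletes the service entries; the leftover folders still need to be removed separately
sc.exe delete SAAZappr
sc.exe delete SAAZapsc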


Clean Up Orphaned Calendar Items in Exchange 2007

Updated 3/29/2012

A common problem I’ve read about, and personally experienced, is deleting a user and their mailbox, only to find out later that they had a recurring calendar meeting in a conference room, or they were an administrative assistant, or something like that. This will cause orphaned calendar items that can be a pain to clean up. When I recently ran into this problem, I noodled around trying to find an answer. I found forums on Microsoft’s site, Experts-Exchange, etc. with nothing that was really helpful. Finally, I hit up a peer of mine, Robert Durkin. Robert sent me a link to this post by Dominic Savio that got me going in the right direction.

Dominic’s post covered the basics and had the information I needed, but it still required some playing to get what I needed. So, I ended up with three commands to clean up old orphaned calendar items:

Command 1:

Export-Mailbox -Identity <user alias> -SenderKeywords "deleted_user@company.com" -IncludeFolders "\Calendar" -DeleteContent

This command will delete all calendar appointments originating from the deleted user in a single target mailbox using that deleted user’s email address. This will ensure that only calendar appointments from that user will be deleted since we’re A) using a unique string to identify the appointments and B) specifying the Calendar folder. Be careful if you’ve added the departed user’s email as an alias to another account because I didn’t test that and I am not sure what those results would be.

Command 2:

Get-Mailbox | Export-Mailbox -SenderKeywords "deleted_user@company.com" -IncludeFolders "\Calendar" -DeleteContent

This command will delete calendar appointments originating from the deleted user in every mailbox, in the unlikely event they were meeting happy.

Command 3:

Get-Mailbox -Filter {CustomAttribute14 -eq 'ResourceMB'} | Export-Mailbox -SenderKeywords "deleted_user@company.com" -IncludeFolders "\Calendar" -DeleteContent

I borrowed the filter portion of my last post in this command to delete the appointments originating from the deleted user in every mailbox whose Custom Attribute 14 is set to ResourceMB. I went ahead and entered this custom attribute for every conference room, projector, and video cart we have so that I can clean them all up with one command.

You can also use the -TargetMailbox parameter to redirect items to a separate mailbox instead of deleting them, in the event of a disaster. The full list of parameters for TargetMailbox is located here.
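For example, a sketch of that redirect approach might look like this (the target mailbox and folder names here are made up; substitute your own):

# Copy the matching items to an archive mailbox instead of deleting them outright
Get-Mailbox -Filter {CustomAttribute14 -eq 'ResourceMB'} | Export-Mailbox -SenderKeywords "deleted_user@company.com" -IncludeFolders "\Calendar" -TargetMailbox archive-mbx -TargetFolder "OrphanedCalendarItems"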

Quick note. I ran into a scenario where the user’s account was already deleted, so when I would run the command, it didn’t do any clean up. When I checked the appointment, I saw that the user had ‘No e-mail address exists for this person’ in the properties in Outlook. Since this was the case, the command using the email address obviously didn’t work. I replaced the email address with the displayed user name in the appointment and it worked like a champ. The modified command looked something like this:

Get-Mailbox -Filter {CustomAttribute14 -eq 'ResourceMB'} | Export-Mailbox -SenderKeywords "Lastname, Firstname" -IncludeFolders "\Calendar" -DeleteContent

Be sure to get the ‘Lastname, Firstname’ value from what is displayed in the orphaned appointment.


Select and Clean Out Exchange 2007 Mailbox using a Custom Attribute

We were getting slim on storage on our Exchange server, so I had to knock out some cleanup today. I saw that there were several service accounts with 40K messages or so apiece in them. I’m not sure how common these mailboxes are, but some are dumping grounds for NDRs, network/service alerts, or in our case, mail flow monitoring accounts. So, I composed this simple command to clean out the inbox of those service accounts. This script is for 2007; I’m not 100% sure how different it would be in 2010, since I know they made some significant changes to the Export-Mailbox command in 2010, but this worked like a dream in 2007:

Export-Mailbox -Identity 'service-mailbox' -DeleteContent

This quick script will delete all messages from the mailbox with no backup. Of course, you have to replace ‘service-mailbox’ with the alias of your service mailbox.

Now, I took it one step further. In order to repeat this process on a semi-regular basis, I added an ‘svc’ value to Custom Attribute 15. I then wrote this command to select these accounts and clean them out:

Get-Mailbox -Filter { CustomAttribute15 -eq 'svc' } | Export-Mailbox -DeleteContent

I personally ran the above command with a -WhatIf at the end of it to make sure I was going to be cleaning out the mailboxes I had intended to clean out.

I now have a command I can schedule to run monthly to keep these mailboxes under control.
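For completeness, tagging a new service account so the monthly run picks it up is a one-liner; a sketch (substitute your own alias):

# Stamp the mailbox so the scheduled cleanup command selects it
Set-Mailbox -Identity service-mailbox -CustomAttribute15 'svc'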