This took me far too long to figure out, and I wasted a lot of time trying to get it working: 802.1x EAP-TLS authentication for Cisco phones doesn’t work with Microsoft’s NPS (or at least not easily). This is due to a combination of how NPS verifies certificates and the certificates that Call Manager issues to phones.
Specifically, the certificates (called LSCs) that Call Manager issues to phones via its CAPF functionality are missing the CDP and AIA extensions. For some reason Cisco just didn’t include them. The issue arises when NPS goes to authenticate the phones, because NPS requires those extensions so it can check whether the certificate has been revoked.
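You can confirm this yourself by exporting an LSC from a phone and inspecting its extensions (certutil -dump works too). A quick sketch; the file path is just an example:

```powershell
# Load an LSC exported from a phone (path is an example)
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 'C:\temp\phone-lsc.cer'

# 2.5.29.31 = CRL Distribution Points (CDP); 1.3.6.1.5.5.7.1.1 = Authority Information Access (AIA)
$oids = $cert.Extensions | ForEach-Object { $_.Oid.Value }
if ($oids -notcontains '2.5.29.31')         { 'Certificate has no CDP extension' }
if ($oids -notcontains '1.3.6.1.5.5.7.1.1') { 'Certificate has no AIA extension' }
```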
Cisco has a bug entry about this here: https://bst.cloudapps.cisco.com/bugsearch/bug/CSCup94684/?reffering_site=dumpcr. Be aware that you need a Cisco account to view it.
I also found this forum post that set me in the right direction: https://supportforums.cisco.com/t5/ip-telephony/802-1x-eap-tls-with-cisco-ip-phone-on-ms-nps/m-p/1768995/highlight/true#M166951
I ended up opening a support case to see if there was any way to get this working. They said it might be possible using the default (MIC) certificate that the phones ship with, but that would be a large hassle since you’d need to import those certificates into the NPS server. There is also a registry key that can be set so that NPS won’t verify the certificate, but that’s rather insecure (one of those settings Microsoft says to only use in a lab), and I didn’t want to go that route.
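For completeness, if you did want to test that route in a lab: to the best of my knowledge, the values involved are the documented EAP-TLS revocation-checking overrides under the EAP type 13 provider key. A sketch, lab use only:

```powershell
# LAB ONLY: relax EAP-TLS certificate revocation checking on the NPS server.
# These DWORDs are the documented overrides for the EAP-TLS provider (EAP type 13);
# verify against current Microsoft documentation before relying on them.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\RasMan\PPP\EAP\13'
Set-ItemProperty -Path $key -Name IgnoreNoRevocationCheck -Value 1 -Type DWord
Set-ItemProperty -Path $key -Name IgnoreRevocationOffline -Value 1 -Type DWord
```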
The solution I’ll be looking at next will be another 802.1x server. Cisco’s ISE would be nice, but there is a cost to it, so I’ll first be investigating FreeRADIUS to see how it works.
I recently came across an issue where I was unable to send email as a service account as part of a scheduled task that was running a PowerShell script. I checked the SMTP receive logs in Exchange and came across the following error:
Inbound authentication failed because the client username doesn't have submit permission.
Clearly, this pointed to a permissions issue on our receive connector. To modify those permissions, open ADSI Edit on a domain controller and browse to the following location:
Now, right-click the receive connector you’re using, and on the permissions tab add the following user. The error above spoke specifically about the submit permission; however, I found that if you didn’t get the permissions quite right, you’d get the following error:
550 5.7.1 Client does not have permissions to send as this sender.
The permissions you’d need to set are:
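The same grants can also be made from the Exchange Management Shell instead of ADSI Edit. A sketch; the connector name and service account are placeholders, and the two extended rights shown are the ones commonly needed for submit/send-as scenarios:

```powershell
# Placeholders: "SERVER\Relay Connector" and DOMAIN\svc-smtp are examples.
Get-ReceiveConnector "SERVER\Relay Connector" |
    Add-ADPermission -User 'DOMAIN\svc-smtp' -ExtendedRights `
        'ms-Exch-SMTP-Submit', 'ms-Exch-SMTP-Accept-Any-Sender'
```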
I was recently working on a PowerShell script that needed to write to an Excel spreadsheet on a server. I installed Office, but it wasn’t working. I had to Google for a long time to find the answer, so I figured I’d repost it. In order for PowerShell to work with the Excel COM objects, a specific directory needs to exist. Which one varies depending on whether you’re running the 32-bit or 64-bit version of Office. The directory is:
If that directory doesn’t exist, you’ll just need to create it.
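A check-and-create sketch; I’m assuming the widely reported systemprofile Desktop folders are the directories in question (64-bit and 32-bit respectively), so adjust if your case differs:

```powershell
# Assumption: these are the commonly cited paths for the Excel COM automation
# issue when running non-interactively (64-bit Office, then 32-bit Office).
$paths = @(
    "$env:windir\System32\config\systemprofile\Desktop",
    "$env:windir\SysWOW64\config\systemprofile\Desktop"
)
foreach ($p in $paths) {
    if (-not (Test-Path $p)) { New-Item -Path $p -ItemType Directory | Out-Null }
}
```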
This is one of those obscure errors that I’m sure most people won’t come across, but I encountered it recently and didn’t find much while searching for it, so hopefully this helps someone!
The gist of the error is: I couldn’t get an Authentication Agent to talk to the RSA Security server in order to authenticate. In this case the Authentication Agent was a Cisco ASA used for the AnyConnect VPN, which connected to the RSA Security Server using SDI. I could try to explain the setup with words, but a simple diagram will make it much easier:
Being able to use the Windows registry effectively is a very powerful tool for a Windows administrator. There are a number of ways to change settings: GPO, SCCM, logon scripts (not recommended), and of course the manual option of regedit. However, as far as I know, there isn’t really a good way to find out what registry settings are currently set on a bunch of computers. In order to alleviate that, I put together the below script to facilitate such queries:
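The shape of such a query looks like the following sketch; the key path, value name, and input file are all placeholders for whatever you’re hunting for:

```powershell
# Hypothetical example: query one registry value across many computers.
$computers = Get-Content .\computers.txt          # one hostname per line
$keyPath   = 'HKLM:\SOFTWARE\Example\Settings'    # placeholder key
$valueName = 'ExampleValue'                       # placeholder value name

Invoke-Command -ComputerName $computers -ScriptBlock {
    param($path, $name)
    [pscustomobject]@{
        Computer = $env:COMPUTERNAME
        Value    = (Get-ItemProperty -Path $path -Name $name -ErrorAction SilentlyContinue).$name
    }
} -ArgumentList $keyPath, $valueName
```

Unreachable machines will surface as Invoke-Command errors, so in practice you may want to wrap this with -ErrorAction and log failures separately.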
The other day I was reviewing a script written by the admin who had been in the role before me, and it contained this section that I thought was quite humorous:
PowerShell, or any other scripting or programming language, is just that, a language. Everyone has their own mannerisms and ways of speaking, and for the most part can get by just fine. However, just like traditional languages, there can be beautiful and eloquent code; a few words can be incredibly powerful (though not always for good, as anyone who has had a recursive delete get away from them can attest). This often comes with a deep understanding gained through experience, which of course can take quite a while to develop.
Someone might look at the above code block and think it looks just fine (besides the somewhat funny variable name), and indeed when I first started out with programming I would likely have agreed. What constitutes good scripting is oftentimes a matter of opinion, but in my mind there are a few central tenets, one of the main ones being:
Code should be as minimal as possible without sacrificing readability
This is more of a desktop support/troubleshooting story, but I figured it was a difficult enough one to be worth describing. Every so often one of those problems comes along where it doesn’t really make much sense why things don’t work as they should, where it seems like there must be some corruption within the system itself.
First, some backstory. This issue came about because of a project to move from an old Server 2003 print server to one running Server 2012 R2. Everything went well for the most part; clients were moved over and everything was great, except for one user. Typically with an issue like this the call would be to simply re-image and be done with it; however, this user had a complicated system setup and couldn’t spend the time getting back up to speed with a new computer.
There may be a better way to do this, but from what I found there are limited options for changing HP BIOS settings remotely. For their servers there are some nice PowerShell cmdlets, but for desktops the options are limited. HP does have a BIOS Configuration Utility (which can be found here: https://ftp.hp.com/pub/caps-softpaq/cmit/HP_BCU.html), but it doesn’t include remote options; you’d typically need to run it interactively on the computer you want to make changes to, which is a bit prohibitive. To get around this I wrote a little script that copies the executable from a shared directory and then invokes a command, which allows you to make changes on as many computers as you want at once.
The script I wrote was tooled specifically to change the boot options, but it can be modified to change any BIOS setting available.
Writing this wasn’t too difficult, but one piece gave me a spot of trouble: the invoke command. Because I wanted to include a variable, run the utility from the command line, and include some quotation marks so the command would be interpreted correctly, I had to play around with it a fair bit. There are a couple of ways to run executables from PowerShell, but I went with cmd /c inside an Invoke-Command, partially because it was one of the least complex methods, and because I was able to get it to work. To make use of the BootOption parameter I had to include a param() declaration in the ScriptBlock along with the ArgumentList parameter. And because commas and slashes are treated specially by the PowerShell interpreter, I had to escape them using the ` character (be aware this also escapes the newline character, so it can be used to split a command across multiple lines for readability).
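A trimmed-down sketch of the pattern: the share path, executable name, and exact /setvalue argument syntax here are from memory and may need adjusting for your BCU version, but the param()/ArgumentList and backtick-escaping mechanics are the point:

```powershell
param([string]$BootOption, [string[]]$ComputerName)

foreach ($computer in $ComputerName) {
    # Share path and executable name are examples; adjust for your environment.
    Copy-Item '\\fileserver\tools\BiosConfigUtility64.exe' "\\$computer\C$\Temp\" -Force

    Invoke-Command -ComputerName $computer -ScriptBlock {
        param($opt)   # receives $BootOption via -ArgumentList
        # The embedded quotes (and the comma between arguments) are escaped
        # with ` so the string reaches cmd /c intact.
        cmd /c "C:\Temp\BiosConfigUtility64.exe /setvalue:`"Boot Order`"`,`"$opt`""
    } -ArgumentList $BootOption
}
```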
Recently I was working on creating new Group Policies to assign printers to users. When working on these kinds of things it’s of course very advantageous to know who is currently using which printer, so in order to determine who was using what, I wrote the script below.
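The gist of the approach is below. Per-user printer mappings live under each user hive at Printers\Connections (where the subkey name encodes the printer path with commas in place of backslashes), so the sketch walks HKEY_USERS on each machine; the input file is a placeholder:

```powershell
$computers = Get-Content .\computers.txt   # placeholder: one hostname per line

Invoke-Command -ComputerName $computers -ScriptBlock {
    # Expose HKEY_USERS as a drive so loaded user hives can be enumerated
    New-PSDrive -Name HKU -PSProvider Registry -Root HKEY_USERS | Out-Null

    Get-ChildItem 'HKU:\' |
        Where-Object { $_.PSChildName -match '^S-1-5-21-' -and
                       $_.PSChildName -notmatch '_Classes$' } |
        ForEach-Object {
            $sid = $_.PSChildName
            Get-ChildItem "HKU:\$sid\Printers\Connections" -ErrorAction SilentlyContinue |
                ForEach-Object {
                    [pscustomobject]@{
                        Computer = $env:COMPUTERNAME
                        User     = (New-Object System.Security.Principal.SecurityIdentifier($sid)).
                                       Translate([System.Security.Principal.NTAccount]).Value
                        Printer  = $_.PSChildName -replace ',', '\'   # ,,server,printer -> \\server\printer
                    }
                }
        }
}
```

Note this only sees hives that are currently loaded (i.e. logged-on users), so you may want to run it during business hours or repeatedly over a few days.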
I’ve lately done a fair bit of work with Group Policy at my organization, involving cleanup, simplification, and optimization. Group Policy hadn’t really been audited in what seemed to be a few years and it was quite a mess, but by the end of the exercise the number of policies was down to about a quarter of what it had been, and they were a lot more logical. Along the way I learned a few things and ran into some quirks that I thought were worth sharing, which I’ll describe in this post. I won’t, however, go over the typical practices of designing and implementing GPOs; there are enough posts and articles about that elsewhere.
The first thing I came across was some old policy settings that would show up in the summary in the management console, but if you edited the policy those options wouldn’t be there. The specific settings had to do with Remote Installation Services, and with printers that had been added through Print Management on a Server 2003 server. After some Googling I found that these options could be changed by editing an attribute in Active Directory. To do so, follow these steps:
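Before editing anything, it can help to look at the current values. To my knowledge the attributes involved are gPCMachineExtensionNames and gPCUserExtensionNames on the policy’s groupPolicyContainer object, which you can inspect from PowerShell (the GPO name below is a placeholder):

```powershell
# "Old Policy" is a placeholder GPO name; requires the GroupPolicy and
# ActiveDirectory modules (RSAT).
$gpo = Get-GPO -Name 'Old Policy'
$dn  = "CN={$($gpo.Id)},CN=Policies,CN=System,$((Get-ADDomain).DistinguishedName)"

Get-ADObject -Identity $dn -Properties gPCMachineExtensionNames, gPCUserExtensionNames |
    Select-Object gPCMachineExtensionNames, gPCUserExtensionNames
```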