Vim as an IDE

I finally took the time to set up Vim as an IDE on my workstations. I want this to serve as a reference for anyone curious whether they installed everything correctly; this is what my system looks like after each step.

Install Instructions

Move your ~/.vimrc file and ~/.vim folder somewhere else for now.

Run this command to install Vundle:
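This is the standard clone command from the Vundle README (assuming the conventional ~/.vim/bundle location):

    git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim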

This installs Vundle.vim into a subdirectory of your ~/.vim folder (~/.vim/bundle by convention). I’ll be using Vundle in the upcoming steps to add the plugins that turn Vim into an IDE.

Download the file vimrc.vim and rename it to ~/.vimrc

Run the following command to have Vundle install the plugins and then quit when finished:
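That is the usual Vundle invocation, run from your shell:

    vim +PluginInstall +qall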

Now that I’m a plugin master, I want to…

Let’s Make A Development Environment with PowerShell

Built on the .NET Framework, Windows PowerShell is a task-based command-line shell and scripting language designed specifically for system administrators and power users. It is built to rapidly automate the administration of multiple operating systems (Linux, macOS, Unix, and Windows) and the processes related to the applications that run on them.

A big difference between PowerShell and other scripting languages is that it is fully object-based rather than text-based. It is therefore important to keep in mind that what you see as output on your screen is only a representation of the object, not the object itself.

Lately, Microsoft has even made PowerShell open source with its cross-platform PowerShell Core iteration (Windows, Linux, and macOS), giving you an automation and configuration tool/framework that works well with your existing tools and is optimized for dealing with structured data (e.g. JSON, CSV, XML), REST APIs, and object models. It includes a command-line shell, an associated scripting language, and a framework for processing cmdlets.

In this post I will explain the basic steps to set up what I would call a “sane” working environment, which arguably gives you an experience similar to a Bash shell.

Specifically, I will discuss the following tools/modules:

  • ConEmu
  • Environment Settings
  • PowerShell Profile

ConEmu

Like many other scriptwriters and developers, I spend quite a bit of time in command-line applications (Windows CMD, PowerShell, Terminal on macOS, etc.). Unfortunately, these applications don’t offer a lot in terms of customization. ConEmu is a highly customizable console emulator with tabs and panes that lets you run any shell you want, which is great for those who want easier multi-tasking.

Install ConEmu

First step is to download the latest version of ConEmu. Pick the installer for either the latest preview or stable version.

Run the installer, choose the 32-bit or 64-bit option (depending on which version of Windows you have installed), and keep all the default options.

Configure ConEmu

Once installed, start it up. The first time you run it, you’ll be presented with a fast configuration screen. Everything here can be changed later but it’s a good place to start.

For usage with PowerShell I use the following settings in ConEmu:

  • Startup
    • Tasks
      • Select Item 5: {Shells::Powershell}
      • In the Commands field I set my environment to:
        • “C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -new_console:d:F:\GitHub\PowerShell”

  • Environment > “Set up environment variables” = “set HOME=F:\GitHub\PowerShell”

  • Features
    • Colors
      • <Monokai>

When we break down the above settings, we are ultimately telling ConEmu to spawn each new shell or tab with the starting directory that contains my PowerShell scripts, and making sure ConEmu can run scripts that reside under this path.

Environment Settings

The PSModulePath environment variable stores the paths to the locations of the modules that are installed on disk. Windows PowerShell uses this variable to locate modules when the user does not specify the full path to a module. The paths in this variable are searched in the order in which they appear.

When Windows PowerShell starts, PSModulePath is created as a system environment variable with the following default value: $home\Documents\WindowsPowerShell\Modules; $pshome\Modules.

Only one environment variable needs to be set for PowerShell, specifically the “PSModulePath” variable. This variable allows modules (and the functions within them) to be used straight from any PowerShell CLI, whether that is powershell.exe, the PowerShell ISE, or a custom execution from a different path.

To set the variable, do the following:
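The original click-through steps are not reproduced here, but one way to do the same thing from PowerShell itself is sketched below (run from an elevated prompt; the F:\GitHub\PowerShell\Modules folder is just an example path for your own modules):

    # Read the current machine-wide value, append a custom module folder, and write it back
    $current = [Environment]::GetEnvironmentVariable('PSModulePath', 'Machine')
    [Environment]::SetEnvironmentVariable('PSModulePath', "$current;F:\GitHub\PowerShell\Modules", 'Machine')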

Source: Microsoft TechNet

PowerShell Profile

If you find yourself using PowerShell as a command-line shell, it may be useful to store your functions and customizations in a profile that gets loaded every time you start the console. PowerShell allows you to specify a set of commands which will run before spawning a new shell, which is convenient for pre-loading modules, setting aliases, and setting the path of the new shell.

Typically (especially if you are doing this for the first time on your workstation) there is no profile file created yet. To be safe, I would run the following cmdlet to verify; if it comes back as False, you know you will need to create one:
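A quick check along these lines will do (the $PROFILE automatic variable points at the current-user, current-host profile path):

    Test-Path $PROFILE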

If one has already been created for some reason, and you want to create a new one right away with no regard for the existing profile’s contents, you can run:
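Something like the following creates (or overwrites) the profile file, including any missing parent folders:

    New-Item -Path $PROFILE -ItemType File -Force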

Once the file has been created, open up your favorite text editor *cough* VS Code w/ the PowerShell extension *cough* and enter something along the lines of:
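As a minimal sketch of what a profile might contain (the module name and paths are placeholders; load whatever you actually use):

    # Microsoft.PowerShell_profile.ps1 - runs every time a new console starts
    Import-Module posh-git             # example module; swap in the modules you use
    Set-Alias ll Get-ChildItem         # a convenience alias
    Set-Location F:\GitHub\PowerShell  # start each shell in my scripts folder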

Summary

Now we are talking! Playing around with the settings, I cleaned up the console’s default look and tabs to give it a nice clean appearance while keeping the search menu in the top right-hand side of the shell:

There are plenty more tweaks you could do to tailor ConEmu to your preferences (the full documentation can be found here), but I find these settings are a good starting point for creating a sleek, effective command-line application that is light years ahead in flexibility and customization for your PowerShell command-line adventures.

Inspired by: Sudesh JetHoe & Mike Larah

Connecting to Microsoft Online Services via PowerShell Sessions

It can be fairly annoying to have to run several different sets of commands in your PowerShell console window just to connect to the online service you are working on.

John Weber and I were having a discussion about how annoying it was, and he and I couldn’t help but ask: “Is there an easier way?”

Utilizing Windows PowerShell, we came up with the idea to enter credentials a single time and, in turn, log into each of the environments we manage most often in its own respective PS console window.

I use ConEmu as my PS CLI interface of choice rather than the native console, and one of its best features, in my opinion, is the tabbed console windows. I like to separate my work so I can better keep track of the commands I am running, especially when I am running a lot of them. Establishing each connection in its own console window sounded like a great idea.

Therefore, using the powers of PowerShell (pun intended) I put together a script to auto-magically do this for us.
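The full script is available via the download link below. As a minimal sketch of the single-prompt idea (the cmdlets and connection URI shown, Connect-MsolService, remote PowerShell to outlook.office365.com, and New-CsOnlineSession, are the standard ones from that era; treat the details as assumptions rather than the actual contents of the script):

    # Prompt once, then reuse the same credential for each service
    $cred = Get-Credential

    # Azure AD (requires the MSOnline module)
    Connect-MsolService -Credential $cred

    # Exchange Online via remote PowerShell
    $exoSession = New-PSSession -ConfigurationName Microsoft.Exchange `
        -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' `
        -Credential $cred -Authentication Basic -AllowRedirection
    Import-PSSession $exoSession

    # Skype for Business Online (requires the SkypeOnlineConnector module)
    Import-PSSession (New-CsOnlineSession -Credential $cred)

The actual script takes this a step further and spawns each connection in its own ConEmu tab, for example by launching powershell.exe with ConEmu’s -new_console switch, so every service gets its own tab.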

If you would like to download the script, click the link below:

Download Link

“Playtime is over, Star Fox!” Err I mean.. “StarCom!”

Star Fox on N64… those were the good ol’ days. Not only do we have to move on from those long, joyful summer days playing Star Fox on N64, but also from our free SSL CA friends at StarCom.

StarCom was bought out by a Chinese CA (WoSign) and was caught backdating certificates and issuing certificates for domains that people didn’t own. StarCom certificates are no longer trusted in Firefox and Chrome. They are in the process of re-issuing new root certs, but for now stay far, far away from them…

StarCom is (well, was) the only competition to Let’s Encrypt in the free certificate space. It is far and away the cheapest direct provider of wildcard certificates (which are impossible to get for free), unless you move into reseller territory. And even their free certificates last four times as long, and don’t require the use of certbot.

Certainly, Let’s Encrypt works great for a lot of people’s needs. But for those it doesn’t (and there are more of them than you might think), this is seriously bad news.

A real bummer – I always liked StarCom because of their approach of charging for verification (with increasing costs for each higher trust level) but not for issuing certs, while still manually checking every cert request (at least for the OV and EV certs in my case).

I used StarCom’s certificates in my labs and even suggested them to a few customers in the past to get around those hefty price tags associated with SSL certificates.

Now that StarCom is SOL, my hand has been forced and I must renew my certificates before my browser starts yelling at me… Let’s Encrypt, let’s see what you’ve got!

Stumbling around the interwebs to make Let’s Encrypt work for me, I found a nifty GitHub repository whose author describes it as “A .NET library and client for the ACME protocol (Let’s Encrypt CA).” The handy QuickStart guide served me well, but I want to expand on some of the gotchas I ran into:

  • The certificate is only valid for 90 days; you will have to generate a new certificate via the process below once the validity period expires.
  • The Root CA is DST Root CA X3
  • The Intermediate CA is Let’s Encrypt Authority X3
  • Signature Algorithm is SHA-256 with RSA Encryption
  • Key Size is 2048 bits
  • The CA is trusted in common web browsers such as Chrome, Firefox, IE, etc.
  • You can have up to 100 SANs
  • Let’s Encrypt Rate Limits

1. ACMESharp Installation

  • First, install the ACMESharp PowerShell module:
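Per the QuickStart, the module comes from the PowerShell Gallery:

    Install-Module -Name ACMESharp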


  • The workstation I was running on did not like that the module was going to make the command ‘Get-Certificate’ available when a command by that name already existed. Since I was doing all this work on a throw-away VM, I chose to -AllowClobber:
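That is simply the same install with the override switch:

    Install-Module -Name ACMESharp -AllowClobber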


  • Then, per the QuickStart guide, I loaded the module:
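That is just:

    Import-Module ACMESharp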

2. Vault Initialization

  • ACMESharp stores your certificates and related artifacts in what it calls a Vault. To use Let’s Encrypt, you will have to start by initializing a Vault.
    • Note, if you run as Administrator, your Vault will be created in a system-wide path, otherwise it will be created in a private, user-specific location.
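Per the QuickStart, initializing the Vault is a one-liner:

    Initialize-ACMEVault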

3. Register

  • Register yourself with the Let’s Encrypt CA:
    • Provide a method of contact, e.g. an email (note, LE does not support the tel: contact method)
    • Accept their Terms-of-Service (TOS).
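Something along these lines (the e-mail address is a placeholder for your own):

    New-ACMERegistration -Contacts mailto:me@example.com -AcceptTos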

4. Set your Domain Identifier

  • Submit a DNS domain name that you want to secure with a PKI certificate.
  • If you want to create a SAN certificate, you will have to repeat this step and steps 5 and 6 for each “myserver.example.com” you want to include. I recommend drafting all of your PowerShell commands ahead of time to ease this tedious process.
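For example (dns1 is just a local alias that the later steps refer back to; substitute your own FQDN):

    New-ACMEIdentifier -Dns myserver.example.com -Alias dns1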

5. Prove Domain Ownership – DNS Challenge

  • The QuickStart guide on the ACMESharp GitHub includes three methods to prove domain ownership. In my case, the easiest way to prove I owned my domain was to complete what is referred to as a DNS Challenge.
  • If you want to handle the DNS Challenge manually, use the following cmdlet to print out the instructions you need to follow on your DNS server/service of choice. Implement those instructions before moving on to the next step.
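Using the manual handler from the QuickStart (dns1 is the alias created in step 4); this prints the DNS TXT record you need to create:

    Complete-ACMEChallenge dns1 -ChallengeType dns-01 -Handler manual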

6. Submit the Challenge Response to Prove Domain Ownership

  • Once you have handled the Challenge using one of the methods in Step #5, you need to let the LE server know so that it can perform a verification.
  • I chose to use the DNS Challenge method, so I used this cmdlet to submit my challenge:
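Namely:

    Submit-ACMEChallenge dns1 -ChallengeType dns-01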

7. Verify the Status of the Challenge

  • Once the Challenge response is submitted, the validation usually takes anywhere from seconds to minutes. I checked the status of the validation for my domain using the following command:
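Something like the following, which refreshes the Identifier and filters its challenges down to the dns-01 entry:

    (Update-ACMEIdentifier dns1 -ChallengeType dns-01).Challenges |
        Where-Object { $_.Type -eq 'dns-01' }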

  • Until the Challenge has been verified, you should see a status of pending.
  • If the Challenge fails for any reason you will see a status of invalid. At this point, you cannot re-attempt the same exact Challenge without first Submitting a new DNS Identifier (Step #4).
  • If the Challenge is successful, you will see a status of valid.


  • Once the Challenge has been successfully validated, you can check the overall status of the Domain Identifier, which should be valid as well.
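For example:

    Update-ACMEIdentifier dns1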

8. Request and Retrieve the Certificate

  • After you have proved your ownership of the domain name you wish to secure, you can create a new PKI certificate request, and then submit it for issuance by the LE CA.
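Per the QuickStart, that boils down to generating the request, submitting it, and then refreshing it until the issued certificate comes back (cert1 is just a local alias):

    New-ACMECertificate dns1 -Generate -Alias cert1
    Submit-ACMECertificate cert1
    Update-ACMECertificate cert1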

Subject Alternative Names (SAN)

If you want to generate a CSR that lists multiple names, you can use the Subject Alternative Names extension of the PKI certificate request to list multiple additional names other than the primary Subject Name. To do so you specify the -AlternativeIdentifierRefs option with a list of one or more additional Identifier references.
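For example (dns2 and dns3 would be additional Identifiers you created and validated in steps 4 through 7):

    New-ACMECertificate dns1 -Generate -AlternativeIdentifierRefs dns2,dns3 -Alias cert1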

9. Export the Certificate

I personally ended up exporting all of the items below to give myself as much flexibility as possible. You can export the certificate and its related assets in a variety of ways, such as the following:

Export Private Key

You can export the private key in PEM format:
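Along these lines (cert1 is the alias from step 8; the file names are up to you):

    Get-ACMECertificate cert1 -ExportKeyPEM 'cert1-key.pem'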

Export CSR

You can export the Certificate Signing Request (CSR) in PEM format:
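Similarly:

    Get-ACMECertificate cert1 -ExportCsrPEM 'cert1-csr.pem'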

Export Certificate Issued By LE

You can export your public certificate that was signed and issued by the Let’s Encrypt CA in PEM or DER format:
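For example, both formats at once:

    Get-ACMECertificate cert1 -ExportCertificatePEM 'cert1-crt.pem' -ExportCertificateDER 'cert1-crt.der'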

Export Issuer Certificate

You can export the public certificate of the issuer, that is, the CA’s signing intermediary certificate:
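For example:

    Get-ACMECertificate cert1 -ExportIssuerPEM 'cert1-issuer.pem'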

Export PKCS#12 (PFX) Archive

You can export the certificate and related assets in PKCS#12 archive (.PFX used by Windows and IIS):
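For example (the module can also password-protect the PFX if you prefer):

    Get-ACMECertificate cert1 -ExportPkcs12 'cert1-all.pfx'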

Final Thoughts

All in all, a fairly painless procedure to get yourself a free, 90-day, trusted SSL certificate for your labs and anything else you see fit, so long as you can live with renewing once every three months. Let’s Encrypt is still fairly new and may have some exciting stuff for us in the near future as it relates to free SSL certificates. Until then, I’ll be harnessing the powers of PowerShell.

Creating an SSH Key on macOS for Automatic Login to Linux

Lately I have been working a lot in a terminal, specifically with Linux VMs in my home lab environment. Logging into multiple VMs over SSH again and again became fairly repetitive, that is, until I created an SSH key for automatic login to my VMs.

To make things easier when logging in via Terminal, you can set up SSH keys so you won’t need a password when you log in.

In my lab I have been primarily working with “minimal” installations of CentOS 6 and CentOS 7. When you do a “minimal” install of CentOS, tools like rsync and scp are not installed by default.

To install rsync and scp run the following on your CentOS client:
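Something along these lines pulls both in on a minimal install (scp comes from the openssh-clients package; run as root or prefix with sudo):

    yum install -y rsync openssh-clients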

While still connected to my CentOS client, I created a directory to store my RSA keys with the following command:
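Essentially just creating the ~/.ssh directory that will later hold authorized_keys:

    mkdir -p ~/.ssh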

With the Terminal open on your macOS system, enter the following command to generate an RSA key for login:
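The defaults are fine for this purpose:

    ssh-keygen -t rsa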

Press enter three times to accept the default settings.

On my macOS client, I like to tighten up the file system permissions via the following example:
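Locking the key directory and private key down to the current user:

    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/id_rsa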

To verify our permissions we can run:
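A simple listing will do:

    ls -la ~/.ssh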

On my macOS client, I see the following:


Now that we have a key, the next thing we need to do is copy the public key to the server we intend to SSH to via Terminal. This will allow for password-less logins. With the IP or DNS name of the server in mind, we can enter a command similar to the following to copy our key to the CentOS VM:
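Something similar to the following appends the public key to the VM’s authorized_keys file (the username and IP address are placeholders for your own):

    cat ~/.ssh/id_rsa.pub | ssh user@192.168.1.50 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'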

Just as on my macOS client where we created the RSA key, I tighten up the SSH directory permissions on my CentOS VM with the following commands from my macOS Terminal:
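Again, the username and IP address are placeholders:

    ssh user@192.168.1.50 'chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'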

We should now be able to SSH from our macOS client to our CentOS VM without providing a password!


Installing PowerShell and VS Code on Mac OSX

“If someone told me 10 years ago that we (Microsoft) would offer SQL Server on Linux, I would have laughed them out of the room.” – Presenter while at Microsoft HQ Redmond, WA

It’s 2016 and Microsoft is breaking down the walls to its golden garden. With recent announcements coming from all across the board at Microsoft, it really is an interesting time. Never before have the tech giants been so open to sharing software across distributions and vendors.

In the early 1980s, Richard Matthew Stallman began a movement within the software industry, preaching that software should be free.

“Free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.”

Why would anyone in their right mind want to give away their software for free? I think the better question is: “How do we enable ourselves as developers and users, while protecting ourselves at the same time?” A user of the software should never be forced to deal with a developer who might or might not support that user’s intentions for the software. The user should never have to wait for bug fixes to be published. Code developed under the scrutiny of other programmers is typically of higher quality than code written behind locked doors (we’re looking at you, Apple and Microsoft). One of the great benefits of open source software comes from the users themselves: if a user desires or needs a new feature, they can add it to the original program and then contribute it back to the source so that everyone else can benefit from it. It is for this very reason that the popularity of GitHub has risen so dramatically in the last couple of years.

This line of thinking sparked the desire to release a complete UNIX-like system (Linux) to the public, free of license restrictions.

Just a couple of weeks ago Microsoft released this blog article announcing its latest “customer obsessed” move, further convincing us that “Microsoft loves Linux.” PowerShell and Visual Studio Code can now be run on Linux and OS X.

It’s about time.

I have been running OSX Sierra on my 2015 13-inch MacBook Pro for as long as the public beta has been available, and now I can edit and run my PowerShell scripts directly in my Terminal.

Installing PowerShell

First, you will need to download the PKG package powershell-6.0.0-alpha.9.pkg from the PowerShell releases page here onto your macOS machine.

Either double-click the file and follow the prompts, or install it from the terminal.
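If you go the terminal route, macOS’s built-in installer utility can install the package (adjust the file name to match the version you downloaded):

    sudo installer -pkg powershell-6.0.0-alpha.9.pkg -target /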


Once the package has been installed, running the command powershell puts you directly into a PowerShell session in your Mac OSX terminal:


$PSHOME is /usr/local/microsoft/powershell/6.0.0-alpha.9/, and the symlink is placed at /usr/local/bin/powershell.

Installing VSCode

Installation

  1. Download Visual Studio Code for Mac OS X.
  2. Double-click on the downloaded archive to expand the contents.
  3. Drag Visual Studio Code.app to the Applications folder, making it available in the Launchpad.
  4. Add VS Code to your Dock by right-clicking on the icon and choosing Options, Keep in Dock.

Tip: If you want to run VS Code from the terminal by simply typing ‘code’, VS Code has a command, Shell Command: Install ‘code’ command in PATH, to add ‘code’ to your $PATH variable.

After installation, launch VS Code. Now open the Command Palette (⇧⌘P) and type ‘shell command’ to find the Shell Command: Install ‘code’ command in PATH command.


After executing the command, restart the terminal for the new $PATH value to take effect. You’ll be able to simply type ‘code .’ in any folder to start editing files in that folder.

Installing PowerShell Extension

Launch the Visual Studio Code app (on Windows, you can do this by typing code in your PowerShell session), then:

  • Press F1 (or Ctrl+Shift+P) to open the “Command Palette” inside the Visual Studio Code app.
  • In the Command Palette, type ext install and hit Enter. It will show all Visual Studio Code extensions available in the marketplace.
  • Choose PowerShell and click Install; you will see something like the below.


  • After the install, you will see the Install button turn into an Enable button.
  • Click on Enable and then OK.
  • Now you are ready for editing. For example, to create a new file, click File->New. To save it, click File->Save and then provide a file name, let’s say “helloworld.ps1”. To close the file, click the “x” next to the file name. To exit Visual Studio Code, click File->Exit.

For further information, check out Microsoft’s documentation on GitHub:

Managing contact objects in AD when Exchange was never there

Have you noticed that if you have never had an Exchange server in your Active Directory environment, it becomes extremely annoying to manage contact objects? I recently came across this nuisance.

Recently tasked with Domino Notes to Exchange Online migrations, I had to create contact objects for the Notes mail users containing the relevant attributes needed to migrate, which presented me with a truckload of contacts to validate.

Thanks to the powers of automation, the go-to cmdlet that comes to mind when opening up my PowerShell console equipped with the AD module is:
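Namely, the Exchange-flavored cmdlet:

    Get-Contact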

Psych!


But…. But Microsoft……

Turns out, if you want to manage contact objects in AD using PowerShell without the availability of the EMS, the cmdlet you actually want is:
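That is, the generic AD module cmdlet, filtered down to contact objects:

    Get-ADObject -Filter 'objectClass -eq "contact"'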

I am sure Microsoft has its reasons, but since I wanted to manage the ADSI attributes of my contacts, it left me scratching my head: how am I to bulk-change attributes for contact objects using the AD module in PowerShell?

In my case, I needed to fix the mailNickname attribute, as it had been populated with the Notes user’s full e-mail address instead of just the username that will become the mail user alias in Exchange Online.

Well, luckily I was able to put something together after prowling the all-knowing Google for answers.

Using the Get-ADObject cmdlet, I was able to target the OU containing the contacts I wanted to manage and select the Name, ObjectGUID, and mailNickname attributes for manipulation. I pipe that into a Set-ADObject for each of the contacts (fun fact: the “%” sign is an alias for ForEach-Object) to replace the mailNickname with whatever it is currently set to, minus the “@” symbol and anything that follows it.
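A sketch of what that pipeline looks like (the OU and domain in -SearchBase are placeholders for the OU I actually targeted):

    Get-ADObject -SearchBase 'OU=Notes Contacts,DC=contoso,DC=com' `
        -Filter 'objectClass -eq "contact"' -Properties mailNickname |
        Select-Object Name, ObjectGUID, mailNickname |
        % { Set-ADObject -Identity $_.ObjectGUID -Replace @{ mailNickname = ($_.mailNickname -split '@')[0] } }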

BAM! 

One task complete.

The other attribute fix was the removal of a proxy address from the proxyAddresses attribute of each contact object. For this task, I was able to target a specific OU containing all the contact objects in question and remove the bad-apple proxy address with the following:

Using the LDAP ADSI provider and a ForEach loop, I was able to get each contact (in my case, only contact objects existed in the OUs I targeted) that had an address like “*@notes.domain.com”. I then pipe that into a ForEach-Object and call PutEx with control code 4 (delete) against the proxyAddresses attribute, passing in the matched value ($_) so it is stripped from the list.
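A rough reconstruction of that loop (the search base is a placeholder, and 4 is the ADS_PROPERTY_DELETE control code, which strips just the matched value from the multi-valued attribute):

    Get-ADObject -SearchBase 'OU=Notes Contacts,DC=contoso,DC=com' `
        -Filter 'objectClass -eq "contact"' -Properties proxyAddresses |
        Where-Object { $_.proxyAddresses -like '*@notes.domain.com' } |
        ForEach-Object {
            $contact = [ADSI]('LDAP://' + $_.DistinguishedName)
            $bad     = @($_.proxyAddresses | Where-Object { $_ -like '*@notes.domain.com' })
            $contact.PutEx(4, 'proxyAddresses', $bad)   # 4 = ADS_PROPERTY_DELETE
            $contact.SetInfo()
        }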

If that went way over your head like it did for me the first time I saw the method, don’t worry. Microsoft has a great article that clarifies this in much more detail:

HOW TO: Use ADSI to Set LDAP Directory Attributes

PowerShell managed to save me from hours of manual attribute changes after all, even when Exchange was never there.

As always, happy scripting!

Microsoft Portal Links

So many links…

With all of the changes and upgrades that Microsoft has been making in its growing and maturing cloud space, it is incredibly easy to get lost when you need to access a specific service to manage.

In my notes, I have recently compiled the most relevant links to the web portals I log into the most, to make it easier to navigate to the right place when managing my Microsoft cloud services.

Office 365

Intune

Azure

MFA/AADRM

Microsoft Partner Network

Set the new domain as the default “Log on to” after ADMT migration

I recently worked with a client that was migrating from one domain to another using Microsoft’s Active Directory Migration Tool or ADMT v3.2.

One of the customer’s concerns, like that of many others, was: “What will the impact be on the end users post-migration?”

If you are like most IT administrators, you probably want to minimize the number of help desk tickets you receive on a daily basis, if you can help it. Migrating from one domain to another has “help desk ticket hell” written all over it.

Issue

One of the concerns associated with using ADMT as the primary migration tool of choice was what it presents to the user when they show up to their computer on Monday morning after a weekend of migrations.

Unfortunately, ADMT does not get super complex with its offerings, but perhaps this is intentional on Microsoft’s part. After all, it is free.

That said, I encountered the following scenario:

ADMT v3.2 by design leaves the previously logged on username and their associated domain intact after migrating from a source domain to a target domain. Windows by default caches the previously logged on username, and ADMT v3.2 does not make any changes to this.

My customer had expressed a business need to have this cached domain and username changed or removed, both to help mitigate tickets to the help desk and to provide the most transparent experience to the end user.

Basically, the customer did not want users showing up Monday morning at their recently migrated workstations and attempting to log in with their previous domain credentials simply because Windows kept the cached value of the last logged-on user.

This customer had reasons to keep the old account active in the source domain, and since everything was on the same network and subnet internally, you could imagine the headaches that could come if Bob from HR logs in using his old credentials on his now migrated workstation.

Solution

Due to the nature of ADMT and Group Policy, it was determined that the best way to resolve this issue and meet this need was to perform the following to remove the cached last logged-on username and domain:

  • In the source domain create an OU by the name of “Pre-Migration Computers”
  • Create a new GPO that enables the policy “Do not display last user name in logon screen.”
  • Prior to migration, place the computer objects that are to be migrated in the “Pre-Migration Computers” OU to allow the group policy to apply to the computer objects ahead of time.
  • In the target domain, create an OU by the name of “Post-Migration Computers”
  • Create a new GPO that enables the policy “Do not display last user name in logon screen.”
  • Link the new GPO to the “Post-Migration Computers” OU so that the policy applies to the computer objects as soon as they are migrated into it.
  • At the time of the migration, migrate the computer objects from the “Pre-Migration Computers” OU in the source domain to the “Post-Migration Computers” OU in the target domain.
  • Once the customer has determined that the end user has had enough time, or the user has been verified to have logged into their workstation post-migration with their new credentials, the computer object can be moved freely to any desired OU, regardless of whether the GPO created as part of this process applies to it.

Skype for Business Online Hybrid Coexistence – User Moves with Move-CSUser

While onsite with a customer who had John Weber and me setting up Skype for Business hybrid coexistence with their previously configured Office 365 tenant, we ran into a conundrum when setting up a PS Session from one of their existing edge servers.

Due to their security measures (i.e., various internal firewalls and proxies), we were unable to establish a PS Session from their network to SfB Online, most likely because WinRM traffic over port 5985 was blocked.

We discovered this when trying to establish a PS Session to run the following command from one of the edge servers:

Since we were not able to run the command from their network environment, we instead ran it from a hotspot connection and changed the value, thus establishing coexistence.
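The exact command we ran isn’t shown above, but in a typical SfB hybrid setup the step in question looks something like the following (treat this as an illustrative assumption rather than the customer’s exact command; New-CsOnlineSession comes from the Skype for Business Online connector module):

    # From a machine that can reach SfB Online over WinRM/HTTPS
    Import-Module SkypeOnlineConnector
    $session = New-CsOnlineSession -Credential (Get-Credential)
    Import-PSSession $session

    # Enable the shared SIP address space required for hybrid coexistence
    Set-CsTenantFederationConfiguration -SharedSipAddressSpace $true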

The customer wanted to be able to manage coexistence (i.e., perform the user moves) from one of their edge servers, and they were concerned with how they would do so using the Move-CsUser cmdlet, considering that WinRM was not able to establish a PS Session.

With this customer, making the change requests to open this port could take weeks to get approved.

To make matters worse, during our troubleshooting we could not even telnet over port 443 from the server to the URL in their O365 tenant that we needed to pass credentials to when performing the move with the Move-CsUser cmdlet.

We ended up being able to run the Move-CsUser cmdlet regardless of all of the above, which led us to contemplate how the command was actually performing the moves to SfB Online so that we could provide documentation to the customer.

The only conclusion we could come up with (we could not find an answer to present to the customer after a few hours of research) was that the cmdlet must have been passing the credentials to their DirSync server, and since they had already synced users to O365, the DirSync server must have been relaying the credentials and the other changes needed to effectively show the user as “in the cloud.”

It seems that, for now, the way the Move-CsUser cmdlet operates remains Microsoft “voodoo” magic, at least until I or someone else gets a chance to analyze the network traffic generated during a run of Move-CsUser when moving users from on-premises to Skype for Business Online.