Validate the Hash or Checksum of a Downloaded File

Every once in a while I want to validate the hash of a downloaded file. Most of the time these are MD5 hashes, but I’ve seen SHA as well.

Windows actually has a couple of built-in ways of generating the hash of a file. You can use certutil or, in PowerShell, you can use Get-FileHash.

With Certutil

To verify a checksum with certutil, use the following command, replacing {FILENAME} and {ALGORITHM} with your file and chosen algorithm: certutil -hashfile {FILENAME} {ALGORITHM}
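
For example, to check the SHA256 hash of a downloaded ISO (the file name here is just a placeholder), you would run something like:

certutil -hashfile C:\Downloads\some-download.iso SHA256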

With Get-FileHash

This is my preferred method. No reason, maybe I like the order of arguments better?

Use Get-FileHash -Algorithm {ALGORITHM} {FILENAME}.
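
For example, checking the same placeholder file and letting PowerShell compare the result against the published hash (the expected value below is made up):

# Compute the hash of the downloaded file
Get-FileHash -Algorithm SHA256 C:\Downloads\some-download.iso

# Or compare it directly against the hash published by the download page
$expected = 'E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855'
(Get-FileHash -Algorithm SHA256 C:\Downloads\some-download.iso).Hash -eq $expected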

That’s it!

Clean up Previously Merged Git Branches

First things first, I tend to build up a lot of local branches. No, I don’t proactively remove them from my machine. (Who does that extremely smart thing? 🙄) At work we use GitHub and Windows. As such I use posh-git to perform almost all my git interactions (you should too because it’s 👌).

Our current process is:

  1. Create Feature Branch
  2. Create Pull Request
  3. Squash and Merge Pull Request
  4. Delete Feature Branch from Remote

Any developer can have any number of feature branches out at a time waiting to be merged. As we are using a “Squash and Merge” process, our branches are not *actually merged*. We delete the original branch after the squash.

But that doesn’t really matter to you. Come to think of it, it doesn’t really matter to future me. What does matter is: how can I remove local git branches that aren’t on the git remote anymore? (Remote meaning GitHub, GitLab, Bitbucket, etc.) No, we aren’t talking about the remote-tracking branches that are removed by pruning; otherwise, one could just prune, no?

I continuously search for this, so I’m documenting it here so I NEVER HAVE TO SEARCH EVER AGAIN 😤

First of all, the answer comes from this question on Stack Overflow.

git checkout master; git remote update origin --prune; git branch -vv | Select-String -Pattern ": gone]" | % { $_.toString().Trim().Split(" ")[0]} | % {git branch -d $_}

If you are in a position where you Squash & Merge, then you will need to replace git branch -d with git branch -D. This will successfully remove all “: gone]” branches from your local machine.
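
If the one-liner is hard to read at a glance, here is the same idea split into steps (a PowerShell sketch, using -D to match the Squash & Merge workflow):

# Switch to a branch that won't be deleted and prune stale remote tracking refs
git checkout master
git remote update origin --prune

# Find local branches whose upstream is marked ": gone]"...
$gone = git branch -vv |
    Select-String -Pattern ": gone]" |
    ForEach-Object { $_.ToString().Trim().Split(" ")[0] }

# ...and delete them
$gone | ForEach-Object { git branch -D $_ }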

If you are using Linux, and not PowerShell, you can use the following command taken from the same question on Stack Overflow.

git branch -r | awk '{print $1}' | egrep -v -f /dev/fd/0 <(git branch -vv | grep origin) | awk '{print $1}' | xargs git branch -d

But Why?!

Any git branch you have checked out essentially involves three branches:
1. The remote branch on GitHub.
2. The remote-tracking branch on your machine.
3. The local branch you do your work on.

When we push a feature branch live (merge a pull request) we delete the “remote branch on GitHub” (#1 above).

When we run `git fetch origin --prune` we remove the remote-tracking branch (taking care of #2 above).

However, taking care of #3 above often requires manual cleanup. That’s where the piped command above comes into play: it takes care of cleaning up branch #3.

Include Surrounding Lines with Grep

I find it extremely useful to include surrounding lines when I’m searching through log files or whatnot for a string of text. It certainly helps provide some context as to what I’m looking at.

However, I constantly forget the flag for including surrounding lines. So I’m posting it here so that, at least when I forget, I know where to find it 😮

The flag is -C. I suppose it should be easy to remember since I want “Context” and “Context” begins with “C”. 🤔

Below is a quick example just in case you want the whole command or you enjoy copying and pasting all the things – hey… no judgement here.

grep -F -C 5 "[5aa027aeb0ebb]" my_special_log.log
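
If you only care about one side of the match, GNU grep also has -B (lines before) and -A (lines after), if memory serves:

grep -B 2 -A 8 "timeout" my_special_log.log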

Debug Chassis.io WordPress with Visual Studio Code

I just recently got Visual Studio Code hooked up with the virtual Vagrant machine hosting my local dev version of WordPress. I’m posting the steps I took here. In the end it’s fairly simple to do.

Most of the guides out there show you how to hook up VS Code with a locally running copy of WordPress. However, I’m using Chassis.io for my dev version of WordPress. Chassis.io makes use of Vagrant and a virtual machine. I did not find anything that showed me how to hook VS Code up with a copy of WordPress running on a virtual machine, as is the case with a Chassis.io setup.

This post assumes that you’ve already set up the Chassis XDebug extension.

Setup Chassis for Debugging with Visual Studio Code

The first thing we need to do to get the Chassis XDebug extension working with Visual Studio Code is to set up the IDE Key. Setting up the IDE Key consists of two steps.

  1. Set the IDE Key in the Chrome XDebug Helper extension. (You should have this extension if you followed the Chassis XDebug Extension setup guide)
  2. Set the IDE Key for the Vagrant machine.

Set the IDE Key in the XDebug Helper Extension

Bring up the XDebug Helper extension options page. You can do this by right-clicking the extension icon and selecting Options.

XDebug Extension Options

Find the section for the IDE Key. Select Other as the default session key and type in VSCODE.

XDebug Extension IDE Key Setting

Save it. Next we need to set the IDE Key for the Vagrant machine.

Set the IDE Key for the Vagrant Machine

This step is pretty simple. First you need to navigate to the root Chassis directory. Mine is located at C:\projects\chassis.

  1. Create a config.local.yaml file if one doesn’t already exist.
  2. Add ide: VSCODE to the config.local.yaml file (see the snippet below).
  3. Run vagrant provision which should update the settings on your local vagrant machine.
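
For reference, the relevant part of config.local.yaml ends up being just this one line (alongside whatever other overrides you may already have):

ide: VSCODE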

To confirm that the IDE Key is indeed VSCODE, see the “xdebug” section on the PHPInfo page for the machine.
Example: http://vagrant.local/phpinfo.php

xdebug section on the PHP Info page

Setup Visual Studio Code for Debugging with Chassis

If you are using Visual Studio Code to develop PHP then you should install the PHP Extension Pack. Bring up the VS Code Extensions menu and search for “PHP Extension Pack”. This extension pack includes the PHP IntelliSense plugin and the PHP Debugger plugin. You will need the PHP Debugger plugin for debugging.

Next we need to set up a debugging configuration.

VS Code Debugging Window
Click the Gear to set up a Debugging configuration.

  1. Bring up the VS Code debugging window.
  2. Click the “Gear” icon.
  3. Select “PHP” as your environment from the popup textbox.

Now you will see a “launch.json” file in your VS Code window. This contains some default settings for debugging PHP. The file will not work for our purposes as it is. We need to add a couple properties to the JSON to hook VS Code up with our WordPress site.

  1. serverSourceRoot – This is the directory for your code on the server (Chassis.io).
  2. localSourceRoot – This is the directory for your code on your development machine.

The serverSourceRoot needs to be the path to your source code on the server. In my case the value is /vagrant/content/plugins/my-awesome-plugin.

The localSourceRoot is used to match the server source up with your local source. In my case I set this to ${workspaceRoot} which is a special variable referring to the path of the opened folder in VS Code.

In the end my launch.json file looked like this:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for XDebug",
            "type": "php",
            "request": "launch",
            "port": 9000,
            "serverSourceRoot": "/vagrant/content/plugins/my-awesome-plugin",
            "localSourceRoot": "${workspaceRoot}"
        },
        {
            "name": "Launch currently open script",
            "type": "php",
            "request": "launch",
            "program": "${file}",
            "cwd": "${fileDirname}",
            "port": 9000
        }
    ]
}

All Done

Alright! That should be it. Save your launch.json file, set a breakpoint in your code, and start the debugger. When you visit the relevant WordPress page on your Chassis box you will notice your breakpoint is hit.

VS Code Caught Breakpoint
We caught a breakpoint!

Connect to a Chassis.io Vagrant Hosted WordPress Database

Chassis.io is an excellent tool to get you quickly setup for WordPress development. Barring any timeout issues, the setup is typically as simple as following their QuickStart guide.

Chassis.io uses Vagrant and VirtualBox to set up a virtual machine that hosts your WordPress site. This post covers how you can connect to the WordPress database that lives on that virtual machine. I’ll be using Windows and HeidiSQL for the purpose of this post. The connection information I use in this post comes from this GitHub issue.

Connecting with HeidiSQL

HeidiSQL is my favorite query browser for MySQL and MariaDB databases. I like the layout and the interface is nice and clean.

When you first open HeidiSQL you will see the interface for creating a new database connection.

HeidiSQL Session Manager

Choose whichever name you want to help you remember what this connection is for. I’ve named mine “Chassis” because it’s my connection to the database Chassis.io set up. You’ll also want to set the following settings:

  • Network type: MySQL (SSH tunnel)
  • Hostname / IP: localhost
  • User: wordpress
  • Password: vagrantpassword
  • Port: 3306

That’s it for the basic settings. Now for the SSH Tunnel settings.

HeidiSQL – Plink.exe and Private Key

HeidiSQL uses a utility called “plink.exe” for its SSH capabilities. plink.exe is made by the same author who wrote PuTTY (which I’m sure you’ve heard of). If you haven’t got plink.exe downloaded, you can find the latest exe on this page. You’ll want to grab both plink.exe and puttygen.exe. I stuck both utilities inside a “PuTTY” folder in my Program Files (x86) directory. You can stick them wherever you want to.

Ok, before we set up the SSH tunnel settings we are going to want to set up the private key file that plink.exe will use to communicate with your virtual machine. PuTTY utilities use specific private key files called .ppk files. We are going to want to convert the Vagrant-provided private key file to a .ppk file for use by plink.exe. Luckily, the puttygen.exe utility you downloaded makes this conversion simple.

Launch puttygen.exe. This will launch the “PuTTY Key Generator”. Load in the Vagrant provided private key file by using File > Load Private Key. Navigate to the location of your Vagrant private key file. Mine was located in C:\projects\chassis\.vagrant\machines\default\virtualbox. Your location may be different depending on where your Chassis project is. Find the “private_key” file and open that. The PuTTY Key Generator will take care of loading the key in for you. You should see a “Successfully imported foreign key …” message. Now click “Save private key”, choose a name for it, and save it. I just saved it exactly where the other private_key was.

PuTTY Key Generator
Location of the “Save private key” button

Woot! Now we can fill out the HeidiSQL SSH tunnel settings. Remember where you saved that .ppk file because you’ll need it for this next step.

HeidiSQL – SSH Tunnel Settings

Click on the tab for “SSH tunnel” to access the HeidiSQL Session Manager SSH Tunnel settings.

HeidiSQL SSH Tunnel Settings

Alright, let’s plug in the values!

  • plink.exe location: Insert the path to your plink.exe utility.
  • SSH host + port: localhost and 2222
  • Username: vagrant
  • Password: just leave this blank
  • plink.exe timeout: default is fine
  • Private key file: Path to the .ppk file we created above
  • Local port: 3307 is fine

Now we come to the moment of truth. Push the “Save” button on the HeidiSQL session manager to save your changes. Now push the “Open” button and HeidiSQL should connect to your Vagrant hosted WordPress database. Woot!

Chassis.io Timeout Issue

TL;DR -> Try enabling Virtualization in your BIOS.

I’m trying out http://chassis.io as a way to easily set up a WordPress development environment on Windows. It’s actually quite easy and everything works almost exactly like the Chassis Get Started guide describes.

However, I ran into a timeout issue when attempting to boot up the Virtual Machine using vagrant up. On first run the process installed necessary dependencies and wired most things up. However, it hung for a considerable amount of time when booting up the virtual machine. Eventually it told me that it had timed out. It didn’t start the virtual machine.

VT-x/AMD-V hardware acceleration is not available on your system

Hrmm… I wonder why it’s timing out. Chassis.io uses Vagrant and VirtualBox. So I spun up VirtualBox to see if I could manually start the VM myself. As it turns out, I could not. VirtualBox threw up the following error:

VirtualBox - Error
VT-x/AMD-V hardware acceleration is not available on your system. Your 64-bit guest will fail to detect a 64-bit CPU and will not be able to boot.

Well, that’s nice… (Hint: it’s not nice).

First Try: Disabling Hyper-V

I did some searching. I found a number of posts that indicated the solution was to disable Hyper-V. It sounds like this works for a lot of people. Scott Hanselman actually wrote up a post about how to “Switch easily between VirtualBox and Hyper-V with a BCDEdit boot Entry in Windows 8.1“. I tried this approach. It did not work for me (you can remove a bcdEdit entry using bcdedit /delete {ENTRYGUID} btw).

Second Try: Enabling Virtualization via BIOS

During my search I stumbled upon this SuperUser answer. The answer indicated that, depending on your system, Virtualization could be enabled via the BIOS.

In my case, enabling Virtualization via BIOS involved booting to the UEFI Firmware Settings. I’ve outlined the steps below.

  1. Hold down the Shift key while you click Restart. This will cause your computer to bring up a special menu.

    Hold Down Shift and Restart
    Hold down “SHIFT” and click Restart
  2. Next you need to navigate the option screens to find “UEFI Firmware Settings”
    1. Select “Troubleshoot”
    2. Select “Advanced options”
    3. Select “UEFI Firmware Settings”
    4. Restart

    Steps to UEFI Firmware Settings
  3. This will reboot you into your PC’s UEFI settings, which look a lot like a typical BIOS menu.
  4. Enable Virtualization
    Your system may be different. My system had a “Virtualization” setting located under the “Security” tab. Once I located the “Virtualization” setting I noticed that “Intel (R) Virtualization Technology” was indeed set to Disabled. I enabled it, saved the setting, and restarted my machine.

    Enable Virtualization via BIOS

After enabling “Virtualization” I tried to start the VirtualBox VM one more time. BOOM. It worked. I ran vagrant up via a ConEmu console and… success.

In Conclusion

Chassis.io is a pretty sweet project. If your system is set up correctly then Chassis.io “just works”. In my case my system needed “Virtualization” enabled via a UEFI Firmware Setting.

Have a stupendous day! 🙂

Entity Framework: Update-Database Migrates the Wrong DB

Recently I made the switch from using Visual Studio 2015 to using Visual Studio 2017. For the most part the transition was easy. However, I ran into an issue with Entity Framework updating the wrong database. I’m posting the solution here so I don’t forget 🙂

TL;DR
If you are experiencing issues with Entity Framework then check that your startup project is the correct one.

EF Update-Database Is Not Working

My current setup involves using a local SQL Server Express database. I check the database via SQL Server Management Studio (ManStu) when I run Update-Database to ensure my changes take place. When I run Update-Database from Visual Studio 2015 the changes are reflected in the database. When I run Update-Database from Visual Studio 2017 the changes are not reflected in the database.

Why does Update-Database work correctly in Visual Studio 2015 but not correctly in Visual Studio 2017? Why does Visual Studio 2017 tell me that the changes were applied successfully?

I decided to take a look at the output of Update-Database -Verbose to see if it yielded any helpful information. There I saw:

Target database is: 'MySpecialDB' (DataSource: (localdb)\v11.0, Provider: System.Data.SqlClient, Origin: Convention).

Entity Framework was using (localdb) and not the SQL Server Express database I set up in the app.config. That explains why the changes were applied “successfully” yet never showed up in my database. However, why was Entity Framework using the wrong database?

The Not So Thrilling Simple Solution

I pursued a number of different routes looking for the solution to this issue. In the end the solution is so simple. The wrong startup project was selected. That’s it. In Visual Studio 2015 I was using a different startup project. In Visual Studio 2017 I never set up a startup project, so one was selected automatically.

As it turns out Entity Framework pulls meaningful information (like database connection information) out of the startup project. The fact that I had the wrong startup project selected in Visual Studio 2017 was the reason why my Entity Framework Update-Database commands were not working the way I expected.
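
If you would rather not depend on whichever project happens to be the startup project, my understanding is that the EF6 migration commands can also be pointed at a project explicitly (the project name below is hypothetical):

Update-Database -Verbose -StartUpProjectName "MyApp.Web"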

So, lesson learned, if you are experiencing issues with Entity Framework then check your startup project. It could be that you have the wrong startup project selected 🙂

Fixing UI-SREF-ACTIVE – Specifying a Default Abstract State

I’ve recently begun working with Angular and by extension Angular UI-Router. The fact that you are reading this means that you likely have as well. That said, let’s all pause for a moment and cry together. I know it’s hard. You will get through it. It will be ok. We can do this.

Basic ui-sref-active Usage

One of the things that UI-Router gives you is the ability to add a class to an element if that element’s state is currently active. This is done via the ui-sref-active directive.


<ul class="nav navbar-nav" ng-controller="navController">
   
   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="home">Home</a></li>

   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="notHome">Not Home</a></li>

</ul>

So above we have some basic navigation with two states: the home state and the notHome state. The ui-sref-active directive takes care of adding the active class to whichever li contains the state that is currently active.

The Problem with Abstract States

The problem is that the ui-sref-active directive does not work correctly (or as we expect) when the parent state is an abstract state.

Let’s say you want to expand your “home” state a bit. Maybe you want to add a “dashboard” state and from there link to a “messages” state. You might set up your $stateProvider a bit like this.

$stateProvider
	.state("home",
	{
		abstract: true,
		url: "/home"
	})
	.state("home.dashboard", {
		url: "/dashboard",
		views: {
			"content@": {
				templateUrl: "app/home/dashboard.html",
				controller: "DashboardController"
			}
		}
	})
   .state("home.messages", {
		url: "/messages",
		views: {
			"content@": {
				templateUrl: "app/home/messages.html",
				controller: "MessagesController"
			}
		}
	});

You’ll see we’ve set up home as an abstract state. By default we want to land on our home.dashboard state. We also want ui-sref-active to set the active class on our “Home” link regardless of which child state we are on.


<ul class="nav navbar-nav" ng-controller="navController">
   
   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="home.dashboard">Home</a></li>


   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="notHome">Not Home</a></li>

</ul>

You will notice that in the code above we are now using ui-sref to link to home.dashboard. This is where the problem with ui-sref-active crops up: it will only apply the active class if the state is exactly home.dashboard. We want the active class to appear on any child of the “home” state. As it is, the ui-sref-active directive will not detect home.messages as active. So the question becomes, “how can we fix ui-sref-active so that it detects a parent abstract state?”

Fixing ui-sref-active

The answer comes from Tom Grant in the form of a comment on a GitHub issue.

Tom informs us that there is an undocumented built in solution to this ui-sref-active problem. The solution, he says, is to “use an object (like with ng-class) and it will work”.

Code examples that Tom provides:

<!-- This will not work -->
<li ui-sref-active="active">
   <a ui-sref="admin.users">Administration Panel</a>
</li>

<!-- This will work -->
<li ui-sref-active="{ 'active': 'admin' }">
   <a ui-sref="admin.users">Administration Panel</a>
</li>
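
Applied to the navigation from earlier, where the abstract parent state is home, that would presumably look like this:

<li class="nav-node nav" ui-sref-active="{ 'active': 'home' }"><a ui-sref="home.dashboard">Home</a></li>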

That’s it. Now we can link to children of abstract ui-router states and ui-sref-active will behave the way we expect it should.

Ubuntu Server not completing upgrade

It’s been about seven months since I set up a Wireless GitLab server. Since then I’ve figured out how to list updatable packages on Ubuntu Server. I’ve also performed several updates using sudo apt-get update && sudo apt-get upgrade.

gzip: stdout: No space left on device

Today I ran into a new problem. Upon trying to perform an update I was presented with a peculiar error. It said gzip: stdout: No space left on device and it told me to run apt-get -f install to fix things up. So… that’s what I tried doing. I tried running the apt-get -f install command but to no avail. The command would not complete successfully.

This is about the time when I start getting really annoyed with Linux and the command line and all the things associated with configuring things manually like do I really need to download the entirety of the Linux MAN files inside my HEAD? DO I NEED TO DO THAT? GAHasldkjsadljfsadfsdsdf!!!!

Calm yourself.

The /boot partition is 100% full

Ok, so it turns out that the apt-get process can fail if the /boot partition becomes 100% full. There were a number of suggestions online that indicated you needed to clean out the /boot partition by removing old linux-images that you don’t need anymore. Many of these suggestions involved using sudo apt-get remove [package-name] or using sudo apt-get autoremove which are both completely valid options… IF APT-GET WERE WORKING. But apt-get is not working, that’s the problem.

So… I Googled a lot and dug through a lot of forums. Finally I stumbled on this uber helpful answer on askUbuntu. I’ll go ahead and paraphrase the answer below so that I can easily find it again. Yes. This is all about me.

Cleaning up the /boot partition

In the case where your /boot partition becomes totally full you can use these steps to clean it up. (From flickerfly on AskUbuntu).

  1. Run the following command to get a list of the linux-image files that you don’t need anymore.
    sudo dpkg --list 'linux-image*'|awk '{ if ($1=="ii") print $2}'|grep -v `uname -r`
    
  2. Create a command to remove the files you don’t need anymore. You can do that with a command like this (where brace expansion is used to save keystrokes). Use the output from the command above to build your command.
    EXAMPLE
    sudo rm -rf /boot/*-3.2.0-{23,45,49,51,52,53,54,55}-*
    
  3. Now that apt-get has space to work with you can actually run sudo apt-get -f install to clean things up.
  4. Use Purge to manually resolve issues with “Internal Errors” (if you get any internal errors).
    EXAMPLE
    sudo apt-get purge linux-image-3.2.0-56-generic
    
  5. Run `sudo apt-get autoremove` to clean up anything orphaned by the manual clean.
  6. Now you can finally proceed with those updates you were wanting to do.
    sudo apt-get update && sudo apt-get upgrade
    

Party?

We can party now I think.

List Updatable/Upgradable Packages in Ubuntu Server

A little while ago I set up a GitLab box using Ubuntu Server. When I log in to the server it shows me a short message about available updates. The message looks something like this:

Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

7 packages can be updated.
0 updates are security updates.

I know that I can update these packages by running `sudo apt-get update && sudo apt-get upgrade`; however, I’d like to know what I’m updating before I do it. In the past you could accomplish this by performing a “dry-run” of the command. This essentially showed you the output of the command without actually performing any updates. That worked alright, but honestly, I just want a list of the packages, not the entire output of the command.

Listing the Upgradable Packages

I stumbled upon this answer (made just a few days ago) by AskUbuntu user “doru”. Turns out that getting a list of updatable/upgradable packages is pretty easy. You just run this:

sudo apt list --upgradable

The list --upgradable command will list out all the packages that you can update, their current versions, and the new versions. Boom! That’s exactly what I wanted.
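
If the list ever looks stale, refresh the package index first and then list again:

sudo apt-get update && apt list --upgradable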