Connect to a Chassis.io Vagrant Hosted WordPress Database

Chassis.io is an excellent tool for getting set up quickly for WordPress development. Barring any timeout issues, the setup is typically as simple as following their QuickStart guide.

Chassis.io uses Vagrant and VirtualBox to set up a Virtual Machine that hosts your WordPress site. This post covers how to connect to the WordPress database that lives on that Virtual Machine. I’ll be using Windows and HeidiSQL for the purposes of this post. The connection information I use here comes from this GitHub issue.

Connecting with HeidiSQL

HeidiSQL is my favorite query browser for MySQL and MariaDB databases. I like the layout and the interface is nice and clean.

When you first open HeidiSQL you will see the interface for creating a new database connection.
HeidiSQL Session Manager
Choose whatever name will help you remember what this connection is for. I’ve named mine “Chassis” because it’s my connection to the database Chassis.io set up. You’ll also want to configure the following settings:

  • Network type: MySQL (SSH tunnel)
  • Hostname / IP: localhost
  • User: wordpress
  • Password: vagrantpassword
  • Port: 3306

That’s it for the basic settings. Now for the SSH Tunnel settings.

HeidiSQL – Plink.exe and Private Key

HeidiSQL uses a utility called “plink.exe” for its SSH capabilities. plink.exe is made by the same author who wrote PuTTY (which I’m sure you’ve heard of). If you haven’t got plink.exe downloaded yet, you can find the latest exe on this page. You’ll want to grab both plink.exe and puttygen.exe. I stuck both utilities inside a “PuTTY” folder in my Program Files (x86) directory. You can put them wherever you want.

Ok, before we fill in the SSH tunnel settings we’ll want to set up the private key file that plink.exe will use to communicate with your Virtual Machine. PuTTY utilities use their own private key format, .ppk files, so we need to convert the Vagrant-provided private key file to a .ppk file for plink.exe to use. Luckily, the puttygen.exe utility you downloaded makes this conversion simple.

Launch puttygen.exe. This will open the “PuTTY Key Generator”. Load the Vagrant-provided private key file using File > Load Private Key and navigate to the location of your Vagrant private key file. Mine was located in C:\projects\chassis\.vagrant\machines\default\virtualbox; yours may be different depending on where your Chassis project lives. Find the “private_key” file and open it. The PuTTY Key Generator will take care of importing the key for you, and you should see a “Successfully imported foreign key …” message. Now click “Save private key”, choose a name for it, and save it. I just saved it exactly where the other private_key was.

PuTTY Key Generator
Location of the “Save private key” button

Woot! Now we can fill out the HeidiSQL SSH tunnel settings. Remember where you saved that .ppk file because you’ll need it for this next step.

HeidiSQL – SSH Tunnel Settings

Click on the tab for “SSH tunnel” to access the HeidiSQL Session Manager SSH Tunnel settings.

HeidiSQL SSH Tunnel Settings

Alright, let’s plug in the values!

  • plink.exe location: Insert the path to your plink.exe utility.
  • SSH host + port: localhost and 2222
  • Username: vagrant
  • Password: just leave this blank
  • plink.exe timeout: default is fine
  • Private key file: Path to the .ppk file we created above
  • Local port: 3307 is fine

Now we come to the moment of truth. Push the “Save” button on the HeidiSQL session manager to save your changes. Then push the “Open” button and HeidiSQL should connect to your Vagrant-hosted WordPress database. Woot!
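If you ever want to sanity-check the tunnel outside of HeidiSQL, the same settings translate into a manual plink.exe command, roughly like the sketch below (the plink.exe path and the .ppk filename are examples based on where I put things; adjust them to match your setup):

rem forwards local port 3307 to MySQL (port 3306) inside the Vagrant VM; -N skips opening a shell
"C:\Program Files (x86)\PuTTY\plink.exe" -ssh vagrant@localhost -P 2222 -N -L 3307:localhost:3306 -i "C:\projects\chassis\.vagrant\machines\default\virtualbox\private_key.ppk"

With that tunnel running, anything pointed at 127.0.0.1:3307 with the wordpress / vagrantpassword credentials should reach the database inside the VM.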

Chassis.io Timeout Issue

TL;DR -> Try enabling Virtualization in your BIOS.

I’m trying out http://chassis.io as a way to easily set up a WordPress development environment on Windows. It’s actually quite easy, and everything works almost exactly like the Chassis Get Started guide describes.

However, I ran into a timeout issue when attempting to boot up the Virtual Machine using vagrant up. On the first run the process installed the necessary dependencies and wired most things up, but then it hung for a considerable amount of time while booting the virtual machine. Eventually it told me that it had timed out. It never started the virtual machine.

VT-x/AMD-V hardware acceleration is not available on your system

Hrmm… I wonder why it’s timing out. Chassis.io uses Vagrant and VirtualBox. So I spun up VirtualBox to see if I could manually start the VM myself. As it turns out, I could not. VirtualBox threw up the following error:

VirtualBox - Error
VT-x/AMD-V hardware acceleration is not available on your system. Your 64-bit guest will fail to detect a 64-bit CPU and will not be able to boot.

Well, that’s nice… (Hint: it’s not nice).

First Try: Disabling Hyper-V

I did some searching and found a number of posts indicating that the solution was to disable Hyper-V. It sounds like this works for a lot of people. Scott Hanselman actually wrote up a post about how to “Switch easily between VirtualBox and Hyper-V with a BCDEdit boot Entry in Windows 8.1”. I tried this approach. It did not work for me (you can remove a BCDEdit entry using bcdedit /delete {ENTRYGUID}, by the way).
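For reference, here is a rough sketch of what that BCDEdit approach looks like, in case you want to try it anyway (run from an elevated command prompt; {NEW-GUID} is a placeholder for the GUID that the copy command prints):

rem copy the current boot entry; the command prints the new entry's GUID
bcdedit /copy {current} /d "No Hyper-V"
rem turn the hypervisor off for that new entry only
bcdedit /set {NEW-GUID} hypervisorlaunchtype off
rem later, remove the extra entry again if you don't need it
bcdedit /delete {NEW-GUID}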

Second Try: Enabling Virtualization via BIOS

During my search I stumbled upon this SuperUser answer. The answer indicated that, depending on your system, Virtualization could be enabled via the BIOS.
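Before rebooting into the firmware, you can get a hint of whether virtualization is already enabled from within Windows. A quick check (note that if Hyper-V is installed, systeminfo reports a detected hypervisor instead of these lines):

systeminfo | findstr /C:"Virtualization Enabled In Firmware"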

In my case, enabling Virtualization via BIOS involved booting to the UEFI Firmware Settings. I’ve outlined the steps below.

  1. Hold down the Shift key while you click Restart. This will cause your computer to bring up a special menu.

    Hold down “SHIFT” and click Restart
  2. Next you need to navigate the option screens to find “UEFI Firmware Settings”:
    1. Select “Troubleshoot”
    2. Select “Advanced options”
    3. Select “UEFI Firmware Settings”
    4. Restart

    Steps to UEFI Firmware Settings
  3. This will reboot you into your PC’s UEFI settings, which look a lot like a typical BIOS menu.
  4. Enable Virtualization
    Your system may be different. My system had a “Virtualization” setting located under the “Security” tab. Once I located the “Virtualization” setting I noticed that “Intel (R) Virtualization Technology” was indeed set to Disabled. I enabled it, saved the setting, and restarted my machine.

    Enable Virtualization via BIOS

After enabling “Virtualization” I tried to start the VirtualBox VM one more time. BOOM. It worked. I ran vagrant up via a ConEmu console and… success.

In Conclusion

Chassis.io is a pretty sweet project. If your system is set up correctly then Chassis.io “just works”. In my case, my system needed “Virtualization” enabled via a UEFI Firmware Setting.

Have a stupendous day! 🙂

Entity Framework: Update-Database Migrates the Wrong DB

Recently I made the switch from using Visual Studio 2015 to using Visual Studio 2017. For the most part the transition was easy. However, I ran into an issue with Entity Framework updating the wrong database. I’m posting the solution here so I don’t forget 🙂

TL;DR
If you are experiencing issues with Entity Framework then check that your startup project is the correct one.

EF Update-Database Is Not Working

My current setup involves using a local SQL Server Express database. I check the database via SQL Server Management Studio (ManStu) when I run Update-Database to ensure my changes take place. When I run Update-Database from Visual Studio 2015, the changes are reflected in the database. When I run it from Visual Studio 2017, they are not.

Why does Update-Database work correctly in Visual Studio 2015 but not correctly in Visual Studio 2017? Why does Visual Studio 2017 tell me that the changes were applied successfully?

I decided to take a look at the output of Update-Database -Verbose to see if it yielded any helpful information. There I saw:

Target database is: 'MySpecialDB' (DataSource: (localdb)\v11.0, Provider: System.Data.SqlClient, Origin: Convention).

Entity Framework was using (localdb) and not the SQL Server Express database I set up in the app.config. That explains why the changes were reported as applied successfully yet never showed up in my database. However, why was Entity Framework using the wrong database?

The Not So Thrilling Simple Solution

I pursued a number of different routes looking for the solution to this issue. In the end the solution was so simple: the wrong startup project was selected. That’s it. In Visual Studio 2015 I was using a different startup project. In Visual Studio 2017 I never set a startup project, so one was selected automatically.

As it turns out, Entity Framework pulls meaningful information (like the database connection information) out of the startup project. Having the wrong startup project selected in Visual Studio 2017 was the reason my Update-Database commands were not working the way I expected.
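If you would rather not depend on whichever startup project happens to be selected, EF6’s migration commands also let you spell things out explicitly. A rough sketch (the project and connection string names below are placeholders for your own):

# Package Manager Console; names are illustrative
Update-Database -Verbose -StartUpProjectName "MyApp.Web" -ConnectionStringName "MySpecialDB"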

So, lesson learned: if you are experiencing issues with Entity Framework, check your startup project. It could be that you have the wrong one selected 🙂

Fixing UI-SREF-ACTIVE – Specifying a Default Abstract State

I’ve recently begun working with Angular and, by extension, Angular UI-Router. The fact that you are reading this means that you likely have as well. That said, let’s all pause for a moment and cry together. I know it’s hard. You will get through it. It will be ok. We can do this.

Basic ui-sref-active Usage

One of the things that UI-Router gives you is the ability to add a class to an element if that element’s state is currently active. This is done via the ui-sref-active directive.


<ul class="nav navbar-nav" ng-controller="navController">
   
   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="home">Home</a></li>

   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="notHome">Not Home</a></li>

</ul>

So above we have some basic navigation with two states: the home state and the notHome state. The ui-sref-active directive takes care of adding the active class to whichever li links to the currently active state.

The Problem with Abstract States

The problem is that the ui-sref-active directive does not work correctly (or as we expect) when the parent state is an abstract state.

Let’s say you want to expand your “home” state a bit. Maybe you want to add a “dashboard” state and from there link to a “messages” state. You might set up your $stateProvider a bit like this.

$stateProvider
	.state("home",
	{
		abstract: true,
		url: "/home"
	})
	.state("home.dashboard", {
		url: "/dashboard",
		views: {
			"content@": {
				templateUrl: "app/home/dashboard.html",
				controller: "DashboardController"
			}
		}
	})
   .state("home.messages", {
		url: "/messages",
		views: {
			"content@": {
				templateUrl: "app/home/messages.html",
				controller: "MessagesController"
			}
		}
	});

You’ll see we’ve set up home as an abstract state. By default we want to land on our home.dashboard state. We also want ui-sref-active to set the active class on our “Home” link regardless of which child state we are on.


<ul class="nav navbar-nav" ng-controller="navController">
   
   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="home.dashboard">Home</a></li>


   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="notHome">Not Home</a></li>

</ul>

You will notice that in the code above we are now using ui-sref to link to home.dashboard. This is where the problem with ui-sref-active crops up: it will only add the active class if the current state is home.dashboard. We want the active class to appear for any child of the “home” state, but as it is, the ui-sref-active directive will not treat home.messages as active. So the question becomes: how can we fix ui-sref-active so that it recognizes children of an abstract parent state?

Fixing ui-sref-active

The answer comes from Tom Grant in the form of a comment on a GitHub issue.

Tom informs us that there is an undocumented, built-in solution to this ui-sref-active problem. The solution, he says, is to “use an object (like with ng-class) and it will work”.

Code examples that Tom provides:

<!-- This will not work -->
<li ui-sref-active="active">
   <a ui-sref="admin.users">Administration Panel</a>
</li>

<!-- This will work -->
<li ui-sref-active="{ 'active': 'admin' }">
   <a ui-sref="admin.users">Administration Panel</a>
</li>

That’s it. Now we can link to children of abstract UI-Router states and ui-sref-active will behave the way we expect.
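Applied to the navigation from earlier in this post, the markup ends up looking something like this (the object’s value is the parent state name, so any home.* child should keep the Home item highlighted):

<ul class="nav navbar-nav" ng-controller="navController">
   <li class="nav-node nav" ui-sref-active="{ 'active': 'home' }"><a ui-sref="home.dashboard">Home</a></li>
   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="notHome">Not Home</a></li>
</ul>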

Ubuntu Server not completing upgrade

It’s been about seven months since I set up a wireless GitLab server. Since then I’ve figured out how to list updatable packages on Ubuntu Server, and I’ve performed several updates using sudo apt-get update && sudo apt-get upgrade.

gzip: stdout: No space left on device

Today I ran into a new problem. When I tried to perform an update I was presented with a peculiar error. It said gzip: stdout: No space left on device and told me to run apt-get -f install to fix things up. So… that’s what I tried. Running apt-get -f install got me nowhere, though; the command would not complete successfully.

This is about the time when I start getting really annoyed with Linux and the command line and all the things associated with configuring things manually like do I really need to download the entirety of the Linux MAN files inside my HEAD? DO I NEED TO DO THAT? GAHasldkjsadljfsadfsdsdf!!!!

Calm yourself.

The /boot partition is 100% full

Ok, so it turns out that apt-get can fail if the /boot partition becomes 100% full. A number of suggestions online indicated that you need to clean out the /boot partition by removing old linux-image packages you no longer need. Many of these suggestions involved using sudo apt-get remove [package-name] or sudo apt-get autoremove, which are both completely valid options… IF APT-GET WERE WORKING. But apt-get is not working; that’s the problem.
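If you want to confirm that this is what’s happening on your box, a quick look at the partition and the installed kernel images will tell you (a sketch; your kernel versions will differ):

df -h /boot                              # shows how full the /boot partition is
dpkg --list 'linux-image*' | grep ^ii    # lists the kernel image packages currently installed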

So… I Googled a lot and dug through a lot of forums. Finally I stumbled on this uber-helpful answer on AskUbuntu. I’ll go ahead and paraphrase the answer below so that I can easily find it again. Yes. This is all about me.

Cleaning up the /boot partition

In the case where your /boot partition becomes totally full you can use these steps to clean it up. (From flickerfly on AskUbuntu).

  1. Run the following command to get a list of the linux-image files that you don’t need anymore.
    sudo dpkg --list 'linux-image*'|awk '{ if ($1=="ii") print $2}'|grep -v `uname -r`
    
  2. Create a command to remove the kernel files you don’t need anymore. You can do that with a command like this (where brace expansion is used to save keystrokes). Use the output from the command above to build your command.
    EXAMPLE
    sudo rm -rf /boot/*-3.2.0-{23,45,49,51,52,53,54,55}-*
    
  3. Now that apt-get has space to work with you can actually run sudo apt-get -f install to clean things up.
  4. Use Purge to manually resolve issues with “Internal Errors” (if you get any internal errors).
    EXAMPLE
    sudo apt-get purge linux-image-3.2.0-56-generic
    
  5. Run `sudo apt-get autoremove` to clean up anything orphaned by the manual clean.
  6. Now you can finally proceed with those updates you were wanting to do.
    sudo apt-get update && sudo apt-get upgrade
    

Party?

We can party now I think.

List Updatable/Upgradable Packages in Ubuntu Server

A little while ago I set up a GitLab box using Ubuntu Server. When I log in to the server it shows me a short message about available updates. The message looks something like this:

Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

7 packages can be updated.
0 updates are security updates.

I know that I can update these packages by running `sudo apt-get update && sudo apt-get upgrade`; however, I’d like to know what I’m updating before I do it. In the past you could accomplish this by performing a “dry run” of the command, which showed you the command’s output without actually performing any updates. That worked alright, but honestly, I just want a list of the packages, not the entire output of the command.
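For reference, that dry-run approach looks roughly like this (it doesn’t actually need sudo, since nothing is changed):

apt-get --simulate upgrade    # prints what would be upgraded without touching anything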

Listing the Upgradable Packages

I stumbled upon this answer (made just a few days ago) by AskUbuntu user “doru”. Turns out that getting a list of updatable/upgradable packages is pretty easy. You just run this:

sudo apt list --upgradable

The list --upgradable command lists all the packages you can update, their current versions, and the versions they will be upgraded to. Boom! That’s exactly what I wanted.

Tell Git to ignore changes to a versioned file

There are times when you do not want Git to track changes to a versioned file. In these cases you can update the Git index so that it assumes the file is unchanged. This only affects your local repo, and it stays in effect until you tell Git otherwise.

Tell Git to Not Track Changes

You can tell Git not to track changes by using

git update-index --assume-unchanged <file>

Tell Git to Track Changes (Again)

And when you want Git to track changes again, you can use

git update-index --no-assume-unchanged <file>
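If you later forget which files you’ve flagged, one common trick (just a sketch, not a dedicated command) is to list the index with its status letters and look for lowercase ones; assume-unchanged files show up with a lowercase “h”:

git ls-files -v | grep '^[a-z]'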

GitLab on Ubuntu Server with WiFi

Over the weekend I spent some time setting up GitLab on Ubuntu Server using a WiFi card. For those of you who do not know what GitLab is, check it out. I stumbled upon GitLab several years ago when I was looking for a self-hosted GitHub alternative. Since then, GitLab has greatly improved, and setting it up is fairly easy.

Setting up Ubuntu Server

First, you are going to want to obtain the Ubuntu Server install. You can download this from the Ubuntu Website.

The second step I took was to find an old desktop I wasn’t using anymore. This was going to be my server. I installed a PCI-E WiFi card in the sucker because, honestly, I’m too lazy to run a network cable.

Note: I tried to set up the server multiple times using just the WiFi card. I wouldn’t recommend it, as it was a very frustrating process. I’d highly recommend hooking your new server up via an Ethernet cable at least until you get the WiFi set up. It’s far easier and saves a ton of time.

After I hooked up my server with the Ethernet cable, I booted to the Ubuntu install disc and began the installation process. The process itself is really quite simple. There are a few questions you have to answer, but the whole thing should be over in less than 30 minutes. I just overwrote everything on the hard disk. After it’s done installing it will ask you to remove the installation media. At that point it should reboot, load up, and show the login screen.

Note: Ubuntu Server does not come with a GUI. Everything is done via the command line. You can install a GUI if you want, but there isn’t a GUI packaged in.

Now that I had Ubuntu Server installed I went ahead and logged in. The first thing it showed me was that there were some updates to be installed. So I ran the following commands to update the system:

sudo apt-get update && sudo apt-get upgrade

Getting WiFi up and Running on Ubuntu Server

Now that the system was updated, I wanted to get the WiFi working. To do that I used nmcli, a command-line tool that comes with the Network Manager package. Some people might not like using this tool because I believe it installs some GUI dependencies. Honestly, nmcli was the easiest method I found to get the WiFi working, so I don’t really care about the small number of dependencies that the Network Manager package brings along.

sudo apt install network-manager

Alright. I had the network-manager package installed. Now to connect to my WiFi network.

I read through the “man” page for nmcli. It looks like I can get a list of wifi access points in the area by running the following command.

nmcli device wifi list

Yes! That actually gave me a list of WiFi access points in my area. I saw my home network listed. I was so happy to see this because it meant I didn’t need to configure anything else. The Ubuntu server had recognized my wireless card and the card was working. That made me so happy… 🙂

The next step was to actually connect to my WiFi access point. According to the nmcli man page I can connect using nmcli device wifi connect. My access point requires a key, and it looks like nmcli supports connecting to an access point with a key… so this is a good thing.

nmcli device wifi connect MyAccessPoint password 123456789ACB

Boom! I ran that sucker and it actually worked! I had been struggling and struggling with this before – nmcli is like my new favorite thing ever. EVER.
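If you want to double-check what nmcli did, a couple of read-only commands will show the device and the saved connection (a sketch; output will vary by system):

nmcli device status       # each network device and whether it's connected
nmcli connection show     # saved connections; ones with autoconnect enabled come up at boot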

At this point I rebooted the server and disconnected the Ethernet cable. I wanted to see if the server would automatically connect to the WiFi access point on boot. It seemed to take a long time to start. After it started up I logged in. I tried to ping google.com. No dice. I waited a few moments and tried again… it worked!

I made sure OpenSSH was installed on the server so that I could manage it from another computer.
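In case OpenSSH isn’t already on your install, getting it going is quick (a sketch; on Ubuntu the service is simply called ssh):

sudo apt install openssh-server    # install the SSH server if it isn't there already
systemctl status ssh               # confirm the service is up so you can connect remotely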

My WiFi was now working on the Ubuntu Server. It was connected to my home network, and it automatically connected after the server was turned on.

Setting Up GitLab on Ubuntu Server

Now that I had the WiFi connected I wanted to get GitLab all set up. Luckily, the folks at GitLab have made this incredibly easy. They have a great setup guide here. There are really only a few commands you need to run and then you are good to go. Let’s go ahead and list those commands really quickly.

Install the Dependencies
sudo apt-get install curl openssh-server ca-certificates postfix

These are things that GitLab needs in order to run successfully.

Add the GitLab package server and install the package.
curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
sudo apt-get install gitlab-ce

I used the commands above. However, GitLab mentions an alternative if you aren’t comfortable piping a script straight into bash. You can find the alternative on their guide page.

Configure and Start GitLab
sudo gitlab-ctl reconfigure

The people behind GitLab have really made it incredibly easy to get this up and running. Now you just need to log in to your new GitLab server. When you first visit the page you will be asked to create a new password. This password can be used in conjunction with the “root” username to log in to the system.
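One setting worth knowing about at this point is external_url in /etc/gitlab/gitlab.rb, which controls the address GitLab serves itself on. A rough sketch, using the internal domain name I mention below as an example:

sudo nano /etc/gitlab/gitlab.rb    # set: external_url 'http://gitlab.jeremysawesome.com'
sudo gitlab-ctl reconfigure        # re-run reconfigure to apply the change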

Finishing Touches

That should be it. Your GitLab server is set up and working. You’ve set up the WiFi card so the server is connected to your network. You’ve got OpenSSH installed so you can manage the server from another machine. And you’ve installed GitLab so you can host your own internal Git repositories (as well as collaborate with others on your team, etc.).

The last things I would do:

  1. Change your GitLab username from “root” to something else. You can do this within the GitLab interface.
  2. Set up your router so that it always assigns a certain IP address to your server. This way you don’t have to worry about static IP addresses on the Ubuntu Server itself.
  3. Update your internal DNS so that you can refer to your GitLab server by an actual domain name. I set mine up as “gitlab.jeremysawesome.com”.
  4. Download PuTTY on your Windows machine so that you can remotely manage your server.
    1. Optionally hook this up with ConEmu 🙂
    2. Optionally update with the Solarized theme for PuTTY.
  5. Set your server up somewhere inconspicuous. Hey, you’ve got a WiFi server. Throw it somewhere out of the way.

Alright – that’s it. This post ended up being a bit longer than I thought; however, I’m glad I’ve got it documented. (Even if there wasn’t much to document.)

Why I no longer contribute to StackOverflow – Michael T. Richter

I ran into this post by Michael T. Richter a while ago and found it to be an interesting read. I certainly recall the regex question he’s talking about, and I remember stumbling upon that question myself back in the day. In the past, StackOverflow did seem more like a community of developers who wanted to have fun and help each other out. The dude makes some good points in his (now old and deleted) post.

However, it has been archived, so I link to the archive here, mainly for my own future reference.

Why I no longer contribute to StackOverflow – Michael T. Richter

GIT CLI SSH PassPhrase

I use the Git command-line interface a lot. It helps me keep my Git repositories looking sharp and clean. Interactive rebase auto-squashing with posh-git + ConEmu ftw!

However, from time to time I notice that the Git CLI is asking me for my SSH RSA passphrase more often than I’d like. Sometimes it even asks on every pull. That’s annoying.

It is possible, however, to enter your passphrase only once per session. Setting this up should be as simple as doing the following (a consolidated sketch follows the list):

  1. Add the “bin/” folder of your Git install to your $PATH. This will allow you to reference ssh-agent in your PowerShell environment.
  2. From your PowerShell environment run
    ssh-agent
  3. Now run
    ssh-add
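If you’re using posh-git (mentioned above) and your version of it ships the SSH helper functions, the consolidated sketch looks roughly like this:

# assumes a posh-git version that includes the SSH agent helpers
Start-SshAgent    # starts ssh-agent and sets its environment variables for this session
ssh-add           # prompts once for the passphrase of your default key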

Excellent! That should be it. Now you should be able to push and pull all you want without having to enter your passphrase more than once per session.