Entity Framework: Update-Database Migrates the Wrong DB

Recently I made the switch from Visual Studio 2015 to Visual Studio 2017. For the most part the transition was easy. However, I ran into an issue with Entity Framework updating the wrong database. I’m posting the solution here so I don’t forget 🙂

TL;DR
If you are experiencing issues with Entity Framework then check that your startup project is the correct one.

EF Update-Database Is Not Working

My current setup involves using a local SQL Server Express database. I check the database via SQL Server Management Studio (ManStu) when I run Update-Database to ensure my changes take place. When I run Update-Database from Visual Studio 2015 the changes are reflected in the database. When I run Update-Database from Visual Studio 2017 the changes are not reflected in the database.

Why does Update-Database work correctly in Visual Studio 2015 but not correctly in Visual Studio 2017? Why does Visual Studio 2017 tell me that the changes were applied successfully?

I decided to take a look at the output of Update-Database -Verbose to see if it yielded any helpful information. There I saw:

Target database is: 'MySpecialDB' (DataSource: (localdb)\v11.0, Provider: System.Data.SqlClient, Origin: Convention).

Entity Framework was using (localdb) and not the SQL Server Express database I set up in the app.config. That explains why the changes were reported as applied successfully: they were being applied, just to the wrong database. However, why was Entity Framework using the wrong database?

The Not So Thrilling Simple Solution

I pursued a number of different routes looking for the solution to this issue. In the end the solution was embarrassingly simple: the wrong startup project was selected. That’s it. In Visual Studio 2015 I was using a different startup project. In Visual Studio 2017 I never set up a startup project, so one was selected automatically.

As it turns out, Entity Framework pulls meaningful information (like database connection information) out of the startup project. The fact that I had the wrong startup project selected in Visual Studio 2017 was the reason my Entity Framework Update-Database commands were not working the way I expected.
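If you’d rather not depend on the startup project selection at all, the EF6 migration commands also accept explicit parameters. A quick sketch (the project name below is made up; the connection string name is whatever you called it in your app.config):

Update-Database -Verbose -StartUpProjectName "MyApp.Web" -ConnectionStringName "MySpecialDB"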

So, lesson learned: if you are experiencing issues with Entity Framework then check your startup project. It could be that you have the wrong startup project selected 🙂

Fixing UI-SREF-ACTIVE – Specifying a Default Abstract State

I’ve recently begun working with Angular and by extension Angular UI-Router. The fact that you are reading this means that you likely have as well. That said, let’s all pause for a moment and cry together. I know it’s hard. You will get through it. It will be ok. We can do this.

Basic ui-sref-active Usage

One of the things that UI-Router gives you is the ability to add a class to an element if that element’s state is currently active. This is done via the ui-sref-active directive.


<ul class="nav navbar-nav" ng-controller="navController">
   
   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="home">Home</a></li>

   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="notHome">Not Home</a></li>

</ul>

So above we have some basic navigation with two states: the home state and the notHome state. The ui-sref-active directive takes care of adding the active class to whichever li contains the state that is currently active.

The Problem with Abstract States

The problem is that the ui-sref-active directive does not work correctly (or as we expect) when the parent state is an abstract state.

Let’s say you want to expand your “home” state a bit. Maybe you want to add a “dashboard” state and from there link to a “messages” state. You might set up your $stateProvider a bit like this.

$stateProvider
	.state("home",
	{
		abstract: true,
		url: "/home"
	})
	.state("home.dashboard", {
		url: "/dashboard",
		views: {
			"content@": {
				templateUrl: "app/home/dashboard.html",
				controller: "DashboardController"
			}
		}
	})
   .state("home.messages", {
		url: "/messages",
		views: {
			"content@": {
				templateUrl: "app/home/messages.html",
				controller: "MessagesController"
			}
		}
	});

You’ll see we’ve set up home as an abstract state. By default we want to land on our home.dashboard state. We also want ui-sref-active to set the active class on our “Home” link regardless of which child state we are on.


<ul class="nav navbar-nav" ng-controller="navController">
   
   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="home.dashboard">Home</a></li>


   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="notHome">Not Home</a></li>

</ul>

You will notice that in the code above we are now using ui-sref to link to home.dashboard. This is where the problem with ui-sref-active crops up: it will only add the active class if the current state is home.dashboard. We want the active class to appear on any child of the “home” state. As it is, the ui-sref-active directive will not detect home.messages as active. So the question becomes, “how can we fix ui-sref-active so that it detects a parent abstract state?”

Fixing ui-sref-active

The answer comes from Tom Grant in the form of a comment on a GitHub issue.

Tom informs us that there is an undocumented, built-in solution to this ui-sref-active problem. The solution, he says, is to “use an object (like with ng-class) and it will work”.

Code examples that Tom provides:

<!-- This will not work -->
<li ui-sref-active="active">
   <a ui-sref="admin.users">Administration Panel</a>
</li>

<!-- This will work -->
<li ui-sref-active="{ 'active': 'admin' }">
   <a ui-sref="admin.users">Administration Panel</a>
</li>

That’s it. Now we can link to children of abstract ui-router states and ui-sref-active will behave the way we expect it should.
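Applied to the navigation from earlier, it looks something like this (matching on the parent home state, so any child of home marks the “Home” link as active):

<ul class="nav navbar-nav" ng-controller="navController">

   <li class="nav-node nav" ui-sref-active="{ 'active': 'home' }"><a ui-sref="home.dashboard">Home</a></li>

   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="notHome">Not Home</a></li>

</ul>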

Ubuntu Server not completing upgrade

It’s been about seven months since I set up a wireless GitLab server. Since then I’ve figured out how to list updatable packages on Ubuntu Server. I’ve also performed several updates using sudo apt-get update && sudo apt-get upgrade.

gzip: stdout: No space left on device

Today I ran into a new problem. Upon trying to perform an update I was presented with a peculiar error. It said gzip: stdout: No space left on device and told me to run apt-get -f install to fix things up. So… that’s what I tried doing, but to no avail. The command would not complete successfully.

This is about the time when I start getting really annoyed with Linux and the command line and all the things associated with configuring things manually like do I really need to download the entirety of the Linux MAN files inside my HEAD? DO I NEED TO DO THAT? GAHasldkjsadljfsadfsdsdf!!!!

Calm yourself.

The /boot partition is 100% full

Ok, so it turns out that the apt-get process can fail if the /boot partition becomes 100% full. There were a number of suggestions online indicating that you needed to clean out the /boot partition by removing old linux-image packages that you don’t need anymore. Many of these suggestions involved using sudo apt-get remove [package-name] or sudo apt-get autoremove, which are both completely valid options… IF APT-GET WERE WORKING. But apt-get is not working; that’s the problem.

So… I Googled a lot and dug through a lot of forums. Finally I stumbled on this uber-helpful answer on AskUbuntu. I’ll go ahead and paraphrase the answer below so that I can easily find it again. Yes. This is all about me.

Cleaning up the /boot partition

In the case where your /boot partition becomes totally full you can use these steps to clean it up. (From flickerfly on AskUbuntu).

  1. Run the following command to get a list of the linux-image packages that you don’t need anymore.
    sudo dpkg --list 'linux-image*' | awk '{ if ($1=="ii") print $2 }' | grep -v `uname -r`
    
  2. Create a command to remove the files you don’t need anymore. You can do that with a command like this (where brace expansion is used to save keystrokes). Use the output from the command above to build your command, or see the sketch after this list for one way to generate it.
    EXAMPLE
    sudo rm -rf /boot/*-3.2.0-{23,45,49,51,52,53,54,55}-*
    
  3. Now that apt-get has space to work with you can actually run sudo apt-get -f install to clean things up.
  4. Use Purge to manually resolve issues with “Internal Errors” (if you get any internal errors).
    EXAMPLE
    sudo apt-get purge linux-image-3.2.0-56-generic
    
  5. Run `sudo apt-get autoremove` to clean up anything orphaned by the manual clean.
  6. Now you can finally proceed with those updates you were wanting to do.
    sudo apt-get update && sudo apt-get upgrade
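For future me: here’s a rough sketch that glues steps 1 and 2 together. It only echoes the rm commands (eyeball the output before running anything), and it assumes the standard linux-image-<version> package naming:

dpkg --list 'linux-image*' | awk '{ if ($1=="ii") print $2 }' \
  | grep -v "$(uname -r)" \
  | sed 's/^linux-image-//' \
  | while read ver; do echo "sudo rm -rf /boot/*-${ver}*"; done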
    

Party?

We can party now I think.

List Updatable/Upgradable Packages in Ubuntu Server

A little while ago I set up a GitLab box using Ubuntu Server. When I log in to the server it shows me a short message about available updates. The message looks something like this:

Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

7 packages can be updated.
0 updates are security updates.

I know that I can update these packages by running `sudo apt-get update && sudo apt-get upgrade`. However, I’d like to know what I’m updating before I do it. In the past you could accomplish this by performing a “dry-run” of the command, which showed you the output of the command without actually performing any updates. That worked alright – but honestly, I just want a list of the packages, not the entire output of the command.
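For the record, the dry-run approach is just a flag on the upgrade command; it prints what would happen without installing anything:

sudo apt-get upgrade --dry-run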

Listing the Upgradable Packages

I stumbled upon this answer (made just a few days ago) by AskUbuntu user “doru”. Turns out that getting a list of updatable/upgradable packages is pretty easy. You just run this:

sudo apt list --upgradable

The list --upgradable command will list out all the packages that you can update, what their current versions are, and what the new versions are. Boom! That’s exactly what I wanted.
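The output looks roughly like this (package names and versions below are made up for illustration):

Listing... Done
curl/xenial-updates 7.47.0-1ubuntu2.2 amd64 [upgradable from: 7.47.0-1ubuntu2.1]
tzdata/xenial-updates 2016f-0ubuntu0.16.04 all [upgradable from: 2016d-0ubuntu0.16.04]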

Tell Git to ignore changes to a versioned file

There are times when you do not want Git to track changes to a versioned file. In these cases you can update the Git index so that it assumes the file is unchanged. This only affects your local repo, and it stays in effect until you tell Git otherwise.

Tell Git to Not Track Changes

You can tell Git not to track changes to a file by using

git update-index --assume-unchanged <file>

Tell Git to Track Changes (Again)

And when you want Git to track changes again you can use

git update-index --no-assume-unchanged <file>
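One caveat: it’s easy to forget which files you’ve flagged this way. Git marks assume-unchanged files with a lowercase letter in the ls-files output, so you can list them with:

git ls-files -v | grep '^[a-z]'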

GitLab on Ubuntu Server with WiFi

Over the weekend I spent some time setting up GitLab on Ubuntu Server using a WiFi card. For those of you who do not know what GitLab is, check it out. I stumbled upon GitLab several years ago when I was looking for a self-hosted GitHub alternative. Since then, GitLab has greatly improved, and setting it up is fairly easy.

Setting up Ubuntu Server

First, you are going to want to obtain the Ubuntu Server install. You can download this from the Ubuntu Website.

The second step I took was to find an old desktop I wasn’t using anymore. This is going to be my server. I installed a PCI-E WiFi card in the sucker because, honestly, I’m too lazy to run the network cable.

Note: I tried to set up the server multiple times using just the WiFi card. I wouldn’t recommend it, as it was a very frustrating process. I’d highly recommend hooking your new server up via an Ethernet cable, at least until you set up the WiFi. It’s far easier and saves a ton of time.

After I hooked up my server with the Ethernet cable I booted to the Ubuntu Install disc and began the installation process. The process itself is really quite simple. There are a few questions you have to answer but the whole thing should be over in less than 30 minutes. I just overwrote everything on the hard disk. After it’s done installing it’s going to ask you to remove the installation media. At that point it should reboot, load up, and show the login screen.

Note: Ubuntu Server does not come with a GUI. Everything is done via the command line. You can install a GUI if you want, but there isn’t a GUI packaged in.

Now that I had Ubuntu Server installed I went ahead and logged in. The first thing it showed me was that there were some updates to be installed. So I ran the following commands to update the system:

sudo apt-get update && sudo apt-get upgrade

Getting WiFi up and Running on Ubuntu Server

Now that the system was updated I wanted to get the WiFi working. In order to get the WiFi working I used nmcli. nmcli is a command line tool that comes with the Network Manager package. Some people might not like using this tool because I believe it installs some GUI dependencies. Honestly, nmcli was the easiest method I found to get the WiFi working, so I don’t really care about the small amount of dependencies that the Network Manager package comes with.

sudo apt install network-manager

Alright. I had the network-manager package installed. Now to connect to my WiFi network.

I read through the “man” page for nmcli. It looks like I can get a list of WiFi access points in the area by running the following command.

nmcli device wifi list

Yes! That actually gave me a list of WiFi access points in my area. I saw my home network listed. I was so happy to see this because it meant I didn’t need to configure anything else. The Ubuntu server had recognized my wireless card and the card was working. That made me so happy… 🙂

The next step was to actually connect to my WiFi access point. According to the nmcli man page I can connect using nmcli device wifi connect. My access point requires a key, and it looks like nmcli supports connecting to an access point with a key… so this is a good thing.

nmcli device wifi connect MyAccessPoint password 123456789ACB

Boom! I ran that sucker and it actually worked! I had been struggling and struggling with this before – nmcli is like my new favorite thing ever. EVER.

At this point I rebooted the server and disconnected the Ethernet cable. I wanted to see if the server would automatically connect to the WiFi access point on boot. It seemed to take a long time to start. After it started up I logged in. I tried to ping google.com. No dice. I waited a few moments and tried again… it worked!

I made sure OpenSSH was installed on the server so that I could manage it from another computer.

My WiFi was now working on the Ubuntu Server. It was connected to my home network, and it automatically connected after the server was turned on.
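If you want to double-check things the way I did, nmcli can also report on devices and active connections (both of these are plain status commands, safe to run anytime):

nmcli device status
nmcli connection show --active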

Setting Up GitLab on Ubuntu Server

Now that I had the WiFi connected I wanted to get GitLab all set up. Luckily, the folks at GitLab have made this incredibly easy. They have a great setup guide here. There are really only a few commands you need to run and then you are good to go. Let’s go ahead and list those commands really quick.

Install the Dependencies
sudo apt-get install curl openssh-server ca-certificates postfix

These are things that GitLab needs in order to run successfully.

Add the GitLab package server and install the package.
curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
sudo apt-get install gitlab-ce

I used the above command. However, GitLab mentions an alternative if you aren’t comfortable with a piped script. You can find the alternative on their guide page.

Configure and Start GitLab
sudo gitlab-ctl reconfigure
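Worth knowing: the omnibus package reads the URL it serves itself on from the external_url setting in /etc/gitlab/gitlab.rb. If you need to change it, edit that file and re-run the reconfigure command. The hostname below is the internal DNS name I use (see the finishing touches); swap in your own:

# /etc/gitlab/gitlab.rb
external_url 'http://gitlab.jeremysawesome.com'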

The people behind GitLab have really made it incredibly easy to get this up and running. Now you just need to log in to your new GitLab server. When you first visit the page you will be asked to create a new password. This password can be used in conjunction with the “root” username to log in to the system.

Finishing Touches

That should be it. Your GitLab server is set up and working. You’ve set up the WiFi card so the server is connected to your network. You’ve got OpenSSH installed so you can manage the server from another machine. You’ve installed GitLab so you can host your own internal Git repositories (as well as collaborate with others on your team, etc.).

The last things I would do:

  1. Change your GitLab username from “root” to something else. You can do this within the GitLab interface.
  2. Set up your router so that it always assigns a certain IP address to your server. This way you don’t have to worry about static IP addresses on the Ubuntu Server itself.
  3. Update your internal DNS so that you can refer to your GitLab server by an actual domain name. I set mine up as “gitlab.jeremysawesome.com”.
  4. Download PuTTY on your Windows machine so that you can remote manage your server.
    1. Optionally hook this up with ConEmu 🙂
    2. Optionally update with the Solarized theme for PuTTY.
  5. Set your server up somewhere inconspicuous. Hey, you’ve got a WiFi server. Throw it somewhere out of the way.

Alright – that’s it. This post ended up being a bit longer than I thought, however I’m glad I’ve got it documented. (Even if there wasn’t much to document).

Why I no longer contribute to StackOverflow – Michael T. Richter

I ran into this post by Michael T. Richter a while ago and found it to be an interesting read. I certainly recall the regex question he’s talking about, and I remember stumbling upon that question myself back in the day. In the past StackOverflow did seem more like a community of developers who wanted to have fun and help each other out. The dude makes some good points in his (now old and deleted) post.

However it has been archived and so I link to the archive here, mainly for my own future reference.

Why I no longer contribute to StackOverflow – Michael T. Richter

Git CLI SSH Passphrase

I use the Git command line interface a lot. It helps me keep my Git repositories looking sharp and clean. Interactive rebase auto-squashing with posh-git+ConEmu ftw!

However, from time to time I will notice that the Git CLI is asking me for my SSH RSA passphrase more often than I’d like. Sometimes it even asks on every pull. That’s annoying.

It is possible, however, to only enter your passphrase once per session. Setting this up should be as simple as doing the following:

  1. Add the “bin/” folder of your Git install to your PATH. This will allow you to reference ssh-agent in your PowerShell environment.
  2. From your PowerShell environment run
    ssh-agent
  3. Now run
    ssh-add

Excellent! That should be it. Now you should be able to push, pull all you want without having to insert your passphrase more than once per session.
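For reference, a session might look something like this. The Git install path below is an assumption (adjust it for your machine), and depending on your posh-git version there is also a Start-SshAgent helper that takes care of wiring up the agent’s environment variables for you:

# Hypothetical install path; adjust for your machine.
$env:Path += ";C:\Program Files\Git\usr\bin"

# posh-git helper: starts ssh-agent and sets SSH_AUTH_SOCK/SSH_AGENT_PID
Start-SshAgent

# Prompts for the passphrase once, then caches the key for the session
ssh-add ~\.ssh\id_rsa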

Password Protect a WordPress Subdirectory with .htaccess

There are questions all over the internet regarding how to password protect a sub-directory when you are using WordPress.

I just spent a long time fighting a frustrating battle with this as well. So I’m documenting the resolution here for my (and anyone’s) benefit.

In short

  1. WordPress does not mess with requests to actual directories or files.
  2. If WordPress is messing with your request then you aren’t requesting an actual directory or file.
  3. It’s likely your error responses aren’t set up to return actual files.
  4. Make sure your .htaccess file isn’t generating 500 errors (i.e. ensure the path to your .htpasswd file is correct).

Problem

I’ve added a .htaccess and .htpasswd file but all I see is a WordPress 404 page. I can’t stop crying because it’s not working and my brain hurts.

Yep. That happens. WordPress comes with the following .htaccess file by default:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

Let’s break this down. First we are checking if the mod_rewrite module is even installed. If it is then we are turning the RewriteEngine on. That’s all great. We wouldn’t want to use the engine if it didn’t exist… right?

RewriteBase / – This sets the base of every subsequent Rule and Condition to the root `/`. This way we don’t have to include the root directory at the beginning of any of our rules.

RewriteRule ^index\.php$ - [L] – This rewrite rule checks to see if we are on the index.php page already. The dash in the rule means do nothing. So… if we are already on index.php don’t do anything. The [L] option means that we should stop processing rules now. Don’t do anything else, we’ve got what we wanted. Quite literally this is the [L]ast rule that should be processed.

RewriteCond %{REQUEST_FILENAME} !-f – This condition makes sure that if the current request is hitting an actual existing file then we should do nothing. So WordPress won’t mess with your requests if you try to link to an actual file.

RewriteCond %{REQUEST_FILENAME} !-d – This condition makes sure that if the current request is hitting an actual existing directory then we should do nothing. So WordPress won’t mess with your requests if you try to link to an actual directory.

RewriteRule . /index.php [L] – Finally, if our request passed the above two conditions (it’s not an actual file and not an actual directory) then map the request to index.php. Now the request is mapped and WordPress can do its thing!

That’s Great But…

I know what you are thinking. You are thinking:

If what you are saying is true, then I shouldn’t be seeing a 404 page. My password protected directory actually exists!

Yes. You are correct, your directory does exist.

Solution

When you password protect a directory with .htaccess you are telling the server to return a certain response code: the 401 response code, meaning the user is unauthorized, to be precise. When the browser receives this response code it triggers a username and password prompt. However, and here is the problem, the browser is never receiving the response code.

Why is the browser not receiving the response code?

Good question. If you remember, the WordPress .htaccess checks if the requested URL points to an actual file or directory. It only rewrites you to the index.php file if you aren’t actually requesting a file or directory. When you throw the 401 response code you aren’t actually requesting a file or directory. You are essentially requesting nothing (because you are unauthorized). So the WordPress .htaccess file is behaving correctly: it’s rewriting you to the index.php page and giving you a 404 (because more than likely your password protected directory does not match a permalink on your WordPress blog).

So… if WordPress is making sure that you actually requested a file then… you need to make sure that you are actually getting a file! You can do this by adding the following line to the top of your WordPress .htaccess file:

ErrorDocument 401 default

What you are doing is telling the server to return the default 401 file when it encounters a 401 response code. Once you are returning an actual file WordPress won’t try to grab your request.
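With that line in place, the top of the WordPress .htaccess file ends up looking like this:

ErrorDocument 401 default

# BEGIN WordPress
# ... the rewrite rules from above, unchanged ...
# END WordPress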

Ok. I added that and I’m still having issues. What gives?

If you are like me, then the 401 response code fix wasn’t enough. You are still having the same issue and by now you are wanting to… oh gosh I can’t even think of anything to describe this type of pain.

Let’s look at our .htaccess file we are using to password protect our sub-directory. If you are anything like me your file might’ve looked something like this.

AuthType Basic
AuthName "Password Protected Area"
AuthUserFile /public_html/jeremysawesome.com/mySecretDirectory/.htpasswd
Require valid-user

This looks perfectly valid to me. However, it turns out this file is generating Internal Server Errors! (I know because I added an ErrorDocument 500 default line to my WordPress .htaccess file just for kicks.) But this shouldn’t be generating a 500 error unless I’m doing something wrong.

Turns out. I was.

The AuthUserFile argument needs to be the full server path to your .htpasswd file. Turns out, /public_html wasn’t actually the beginning of my server path. As a result the server was throwing a 500 error. Once I figured out what my full server path was, and added that to my .htaccess file, everything started working.
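For illustration, the corrected file might look something like this (the /home/myuser prefix is made up for the example; your host’s control panel, or running pwd inside the directory, will tell you the real prefix):

AuthType Basic
AuthName "Password Protected Area"
# Full server path; the /home/myuser prefix here is hypothetical.
AuthUserFile /home/myuser/public_html/jeremysawesome.com/mySecretDirectory/.htpasswd
Require valid-user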

To Recap

  1. WordPress does not mess with requests to actual directories or files.
  2. If WordPress is messing with your request then you aren’t requesting an actual directory or file.
  3. It’s likely your error responses aren’t set up to return actual files.
  4. Make sure your .htaccess file isn’t generating 500 errors (i.e. ensure the path to your .htpasswd file is correct).

Whew! Thank goodness that’s over. Happy Blogging 🙂

JavaScript Scoping: Callbacks and Loops

I just ran into this issue last night. The problem: I had a loop that was adding a callback to a method. Something like this:

for(var i=0;i<10;i++){
    // register a handler that should use the loop's current value of i
    $myElement.on('some-event', function(){
        DoSomethingWith(i);
    });
}

What I expected was that the value of the i variable at the time it was called would be used in my callback method. However, this was not the case… the i variable was the same in every single callback.

See this JSFiddle for an example.

The reason for this? JavaScript variable hoisting. Before your code is executed it is scanned and the variable declarations are processed. This has the effect of moving your variable declarations to the top of the enclosing function, regardless of where in the function they are defined (except in cases where you are implicitly declaring global variables). Since var is function-scoped rather than block-scoped, every callback ends up closing over the same single i variable.

So, in our situation we’ve defined var i. This is processed before the loop is processed and it is as if we wrote this:

var i;
for(i=0;i<10;i++){
    $myElement.on('some-event', function(){
        DoSomethingWith(i);
    });
}

Now it becomes a bit more clear why we are running into the issue with i being the same. The reason is because by the time the callback is executed the for loop has already run and the value of the i variable is already 10.

The solution, as far as I can tell, is to use an IIFE (immediately invoked function expression) to scope the variable correctly in order to store the current value for later. It looks ugly and it feels hacky… but it seems to be what is necessary. Update: It appears that you can also use .bind to set the value correctly.

var i;
for(i=0;i<10;i++){
    // The IIFE runs immediately; its parameter captures the value
    // of i at the time the handler is registered.
    $myElement.on('some-event',
        (function(i){
            return function(){
                DoSomethingWith(i);
            };
        })(i)
    );
}

And the JSFiddle to demonstrate.

Example With .bind

var i;
for(i=0;i<10;i++){
    // .bind returns a new function with the current value of i
    // pre-bound as the first argument (undefined is passed as `this`).
    $myElement.on('some-event', DoSomethingWith.bind(undefined, i));
}
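One more note for future me: in ES6 environments, let gives each loop iteration its own binding, which sidesteps this whole problem. A sketch, assuming ES6 support:

for(let i=0;i<10;i++){
    // each iteration gets a fresh binding of i, so each callback
    // sees the value from its own iteration
    $myElement.on('some-event', function(){
        DoSomethingWith(i);
    });
}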