GPG Signing on Windows with an SSH Key

Summary

I just experienced an issue which took me a day to figure out. So, as per normal, I’m going to document it here so that in the future I don’t have to bother looking it up!

My problem: every time I try to commit using Git I am asked for my SSH key passphrase, even though I’ve ensured the SSH agent is running. I can pull from the remote, push to the remote, and do anything else with the remote repository without entering my passphrase more than once. However, when I commit, I’m asked for my passphrase every single time. Why is this a problem? Because when I rebase a branch of 150+ commits I have to enter my passphrase 150 times consecutively. That’s unsustainable.

As you probably know, Git supports signing your commits with an SSH key these days. Christopher, over on dev.to, has a great write-up that can get you started.

Still, even armed with Christopher’s information, I couldn’t figure out how to fix the problem I was running into.

Since this post is meant to guide me in setting this up from scratch (AGAIN), I’m going to go through all the relevant steps. But if you are just looking for the solution to the problem, skip to here.

Setup

Everyone’s setup is a bit different, but I’m going to document the relevant portions of mine.

Environment

  1. Clean Install of Windows 11
  2. Git
    Note: When installing, I select the “Use Git and optional Unix tools from the Command Prompt” option
  3. PowerShell Core
  4. posh-git

If you want to automatically start the ssh agent, which I’d recommend, you can set the service to start automatically.
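
For reference, setting this up from PowerShell looks something like the following. This is a sketch assuming Windows’ built-in OpenSSH, where the service is named ssh-agent; run it from an elevated (Administrator) PowerShell:

# Set the Windows ssh-agent service to start automatically, then start it now
Get-Service ssh-agent | Set-Service -StartupType Automatic
Start-Service ssh-agent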

Restart my computer and I’m good to go.

SSH Setup

After my environment is set up, I set up SSH. This involved creating a new key, adding it to the SSH agent, and making sure the agent was running. I run all these commands from an instance of PowerShell Core (installed above).

Generating a New SSH Key

Since this is a new machine, I generated a new SSH key using the steps on GitHub.

Basically the step is: use the following command, accept the default location, and use a secure passphrase. (1Password offers an excellent password generator if you need inspiration. I love the memorable password feature, XKCD ftw.)

ssh-keygen -t ed25519 -C "[email protected]"

Adding the SSH Key to the SSH Agent

Adding the key to the agent is simple enough. You just need to run a command and give it the location of the private SSH key you generated in the step above. In my case that’s as simple as:

ssh-add c:/Users/YOU/.ssh/id_ed25519
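
To confirm the agent actually holds the key, you can ask it for a list; the new key should show up here (this assumes the agent from the previous step is running):

# List the keys currently loaded into the agent
ssh-add -l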

Start the SSH Agent automatically.
Refer to: Starting the SSH Agent service automatically.

Now I would typically either restart my computer, or close the current PowerShell Core window and open a new one.

Adding the SSH Key to GitHub

If you want to sign commits with GitHub then you need to upload your SSH key 2wice (that’s a clever way of writing twice): once as your “Authentication Key”, which is used for access to the repo, and once as your “Signing Key”. You can do both at https://github.com/settings/keys. Click “Add Key” in the SSH Keys section and upload one for Authentication, then click “Add Key” again and upload one for Signing.
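
To get the key onto your clipboard for pasting into GitHub, something like this should work from PowerShell (note it’s the public .pub file you upload, never the private key):

# Copy the public key to the clipboard
Get-Content ~/.ssh/id_ed25519.pub | Set-Clipboard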

Signing Git Commits

Once again referring to Christopher’s post, I do the following, making sure to switch out the path to my public SSH key:

git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true
git config --global tag.gpgsign true

This next one actually comes from Ahmad’s post on Stack Overflow:

git config --global push.gpgSign "if-asked"
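
With all of that in place, you can read the values back to sanity-check the configuration:

# Each of these should echo back the value set above
git config --get gpg.format
git config --get user.signingkey
git config --get commit.gpgsign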

Solving the Problem

Ok. Now the problem. You’ve successfully connected to GitHub and cloned your repo. Now you start making commits and find that you have to enter your SSH key passphrase over and over, despite the SSH agent running.

It took me a long time to figure out, but the problem is likely that you have two versions of SSH on your machine. The one that comes with Windows by default, and the one that came with Git for Windows when it was installed.

To solve this problem, tell Git for Windows to “Use external OpenSSH” when installing Git. The following comes from this answer on Stack Overflow by Ajedi32.

If you are using Windows’ native implementation of OpenSSH with the native ssh-agent Windows service, make sure that git for Windows was configured to use that SSH implementation when you installed it:

Screenshot of Git for Windows installer; choosing the SSH executable. The "Use external OpenSSH" option is selected.

If you used the bundled OpenSSH installation, git will default to that and will not use any keys imported into Windows’ native ssh-agent service. You need to select “Use external OpenSSH” instead when prompted.

If you did not make this selection when installing, you should be able to fix that by just running the installer again.

Ajedi32
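
If you’d rather not re-run the installer, it should also be possible to point an existing Git install at the native OpenSSH binaries directly. This is a sketch assuming the default Windows install location; gpg.ssh.program controls which ssh-keygen performs the signing:

# Use Windows' OpenSSH for pull/push transport
git config --global core.sshCommand "C:/Windows/System32/OpenSSH/ssh.exe"
# Use Windows' ssh-keygen for commit signing, so it can reach the native ssh-agent
git config --global gpg.ssh.program "C:/Windows/System32/OpenSSH/ssh-keygen.exe"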

And that’s that!

Previous versions of this article said the following. This is no longer what I recommend.

You’ll remember we selected the “Use Git and optional Unix tools from the Command Prompt” option. This option adds the referenced Git and Unix tools to your system’s PATH. The SSH agent that is registered and used for authentication is the one that comes installed with Windows. The one that Git uses for signing is the one that comes with Git.

The order of the PATH entries determines which version of the ssh-agent is used; the first one encountered wins. So, to solve this problem, make sure C:\Program Files\Git\usr\bin is above %SYSTEMROOT%\System32\OpenSSH\ in your system PATH.

Clerk Auth Redirect/Reload Loop

I am currently looking into alternative forms of user authorization and authentication. Among the solutions I am looking into is Clerk. It’s pretty sweet and has a lot of cool features baked in. (Although the MFA support is a premium add-on and they aren’t super upfront about that).

One of the issues I ran into while implementing Clerk was a redirect loop. I set my home page within the Clerk Dashboard, and when I reloaded my app, boom: REDIRECTION FOR DAYS. Clerk kept reloading the home page for all eternity.

So, I added an onbeforeunload handler to the page with a debugger statement inside of it. This paused the page in the inspector before it reloaded and allowed me to actually see what was going on.
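
If you want to try the same trick, this is all it takes (the pause only happens while DevTools is open):

// Pause in the debugger right before the page unloads,
// so whatever is in the console survives long enough to read
window.onbeforeunload = () => { debugger; };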

It turns out that Clerk was outputting an error message into the console. This error message is pictured below:

Clerk Error Message in JS Console

The <SignUp/> and <SignIn/> components cannot render when a user is already signed in, unless the application allows multiple sessions. Since a user is signed in and this application only allows a single session, Clerk is redirecting to the Home URL instead. (This notice only appears in development)

Clerk

Well – ok then. Clerk is redirecting to the Home URL (which is the one it’s already on), causing a permanent redirect loop. It seems like this would be handled better by simply _not_ rendering the SignIn or SignUp components should the conditions for their existence fail.

Hopefully this helps you out! You might consider making your home page and your sign-in page different pages, or conditionally rendering those components, so that Clerk can be happy and not mess things up.
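
For the conditional approach, a minimal sketch in React might look like this, assuming @clerk/clerk-react (Clerk ships SignedIn/SignedOut helpers for exactly this kind of gating; check the docs for your framework):

import { SignedIn, SignedOut, SignIn, UserButton } from "@clerk/clerk-react";

function HomePage() {
	return (
		<>
			{/* Only render the sign-in form when nobody is signed in */}
			<SignedOut>
				<SignIn />
			</SignedOut>
			{/* Signed-in users see the app instead, so Clerk has no reason to redirect */}
			<SignedIn>
				<UserButton />
			</SignedIn>
		</>
	);
}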

POST Body not Included with JavaScript Fetch

I just ran into a problem and I wanted to document it for myself and for anyone else who might have issues. First I describe the problem, then I give the solution. Scroll down if you’re looking for the solution.

The Problem

After posting with JavaScript fetch I did not see the “body” arguments come through on the server. The method of the fetch was set to POST and the body included was an instance of FormData. According to the documentation on MDN, everything should’ve worked. So why wasn’t it working?

The Basic Client Side Code

const body = new FormData(myForm)
// assume myForm.action = "https://example.com/ajax/post"
const response = await fetch(myForm.action, {
	method: "post",
	// FormData sets the multipart/form-data Content-Type header automatically
	body,
})

The Basic Server Side Code

<?php
// file: index.php within the ajax/post directory

// don't bother processing the post if there is none
if(empty($_POST)){
	exit;
}

// ... processing code below

I spent some time debugging, and without a doubt, every POST request to the index.php file arrived without the $_POST array filled out. The $_POST array was empty, as was $_REQUEST; even the oft-touted file_get_contents('php://input') came up empty.

The Solution

You aren’t going to like it. I don’t like it. The solution to this problem is so annoying that you’ll just facepalm like Picard.

Add a slash to the end of the url you are posting to.

The problem url is: https://example.com/ajax/post
The working url is: https://example.com/ajax/post/

Currently, when this URL is POSTed to, the server responds with a 301 redirect before the index.php file is ever hit. But why? Because the URL has no trailing slash. That’s it. You are posting to an index.php file within a directory, but your URL does not have a trailing slash, so your server helpfully redirects you to the URL with the trailing slash. Most clients follow a 301 by re-issuing the request as a GET, so your POST body is dropped along the way.

Yep, that’s it. Add a trailing slash and you’ll see your body come through when debugging.
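
You can watch the redirect happen with curl, using the example URL from above:

# -i prints the response headers; note the 301 status and the
# Location header pointing at the same URL with a trailing slash
curl -i -X POST -d "name=value" https://example.com/ajax/post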

RocketChat server not running. Cancelling

As you might know, I set up a RocketChat server recently on Digital Ocean. So far it’s been working great. An update every once in a while is all it needs.

However, yesterday, I attempted an update that failed. From then on, every attempted update resulted in “RocketChat server not running. Cancelling”. This was very frustrating.

First, a few commands to try that might help:

  1. systemctl restart rocketchat.service – This will start your RocketChat server in case it is stopped.
  2. systemctl status rocketchat.service – Use this command to check the results of the previous command. Typically this will report that the service is “Active” if the previous command was successful.

In my case, the second command reported a “failed” state. The command itself gave me some information as to what the failure was, but not a lot of context as to what caused it. However, it did show me the process that it attempted to run: ExecStart=/opt/nvm/versions/node/v14.19.3/bin/node /opt/Rocket.Chat/main.js (code=exited, status=1/FAILURE).
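
If systemctl status doesn’t give you enough context, the service’s recent journal entries usually will:

# Show the last 50 log lines for the RocketChat service
journalctl -u rocketchat.service -n 50 --no-pager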

Alright! We’re getting somewhere. With that I was able to figure out which command failed and where it was run from. I navigated directly to the /opt/Rocket.Chat directory, which was where the failure was occurring, and from there ran node main.js. The results of this command were much more helpful. They told me: Error: Cannot find module '@meteorjs/reify/lib/runtime'. That looks like an issue with npm dependencies.

So, I poked around the Rocket.Chat directory structure and looked for dependencies for the Rocket.Chat server. I found what I was looking for in the /opt/Rocket.Chat/programs/server directory.

From this directory I ran two commands

  1. npm install
  2. npm ci

Afterwards I attempted to start the RocketChat server again using the systemctl restart rocketchat.service command. I checked it with systemctl status rocketchat.service and found that it was working now! RocketChat was back to running normally. The problem with “RocketChat server not running. Cancelling” was gone!

Google Fi Auto Connect Issues

Ok, so I use Google Fi (formerly known as Project Fi) as my phone provider. I have a Pixel 2 and haven’t felt the need to upgrade. Recently I’ve noticed issues with my service. Specifically, my Pixel 2 will connect to an H+ network or an Edge network in an area I know has reliable 4G LTE. So, what gives?

First, a quick and dirty explanation of the Google Fi network based on my limited understanding. Google Fi utilizes the T-Mobile (which I believe includes Sprint now) and US Cellular networks, as well as Wi-Fi, to provide cellular service to its customers. Phones on the Google Fi network smartly switch to whichever provider has the best signal. At least that’s the idea.

Knowing that Fi uses multiple cell networks to provide service, I wondered which network my phone was using. Using SignalCheck Lite I was able to determine that my phone was connecting to the T-Mobile network by default. In my area US Cellular beats T-Mobile coverage hands down. There is no competition. So what is the deal with my phone auto-connecting to Edge and H+ networks?

Honestly, I don’t know yet. I strongly suspect a recent update to the Google Fi app or services set my phone to prefer T-Mobile regardless of network speed. Whether this was an intentional change or a bug in the auto-connect code, I don’t know. I’ve been able to temporarily fix the issue by forcing a connection to US Cellular using the Google Fi dialer code *#*#34872#*#*.

Google Fi Dialer Codes

These codes come from this post on ArkieNet. I’m including them here just in case the post poofs from the internet in the future.

Note this paragraph from the original article:

The following options are only available for “Designed for Fi” phones. They will not work on the iPhone or “Compatible with Fi” phones because they are T-Mobile only.  See which class of phone you have here.

ArkieNet
ALPHA CODE    DIALER CODE          DESCRIPTION
FI AUTO       *#*#342886#*#*       Set carrier selection to automatic.
FI NEXT       *#*#346398#*#*       Select next carrier.
FI SPR        *#*#34777#*#*        Select Sprint for 2 hours.
FI TMO        *#*#34866#*#*        Select T-Mobile for 2 hours.
FI USC        *#*#34872#*#*        Select US Cellular for 2 hours.
FI SIMON      *#*#3474666#*#*      Select Three (UK only).
FIXME         *#*#34963#*#*        Force reactivation.
FI INFO       *#*#344636#*#*       Get information about the current network.
INFO          *#*#4636#*#*         Get general phone information.
DEBUG         *#*#33284#*#*        Phone debug options.
PRL           *#*#775#*#*          Force download of Preferred Roaming List (Sprint).
PRL           *228                 Force download of Preferred Roaming List (US Cellular).
FI ROAM       *#*#347626#*#*       Turn on international roaming.
SWITCH SIM    *#*#794824746#*#*    Switch to / from eSIM.

Chassis.io Timeout Issue

TL;DR -> Try enabling Virtualization in your BIOS.

I’m trying out http://chassis.io as a way to easily set up a WordPress development environment on Windows. It’s actually quite easy, and everything works almost exactly like the Chassis Get Started guide describes.

However, I ran into a timeout issue when attempting to boot up the virtual machine using vagrant up. On first run the process installed the necessary dependencies and wired most things up. However, it hung for a considerable amount of time when booting the virtual machine. Eventually it told me that it had timed out. It never started the virtual machine.

VT-x/AMD-V hardware acceleration is not available on your system

Hrmm… I wonder why it’s timing out. Chassis.io uses Vagrant and VirtualBox, so I spun up VirtualBox to see if I could manually start the VM myself. As it turns out, I could not. VirtualBox threw up the following error:

VirtualBox - Error
VT-x/AMD-V hardware acceleration is not available on your system. Your 64-bit guest will fail to detect a 64-bit CPU and will not be able to boot.

Well, that’s nice… (Hint: it’s not nice).

First Try: Disabling Hyper-V

I did some searching and found a number of posts that indicated the solution was to disable Hyper-V. It sounds like this works for a lot of people. Scott Hanselman actually wrote up a post about how to “Switch easily between VirtualBox and Hyper-V with a BCDEdit boot Entry in Windows 8.1“. I tried this approach. It did not work for me (you can remove a BCDEdit entry using bcdedit /delete {ENTRYGUID}, btw).
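
For reference, the gist of that approach, as I understand it, is a second boot entry with the hypervisor switched off. Run these from an elevated prompt, where {GUID} stands for the identifier the first command prints:

# Clone the current boot entry under a new name
bcdedit /copy {current} /d "Windows (No Hyper-V)"
# Disable the hypervisor on the cloned entry
bcdedit /set {GUID} hypervisorlaunchtype off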

Second Try: Enabling Virtualization via BIOS

During my search I stumbled upon this SuperUser answer. The answer indicated that, depending on your system, Virtualization could be enabled via the BIOS.

In my case, enabling Virtualization via BIOS involved booting to the UEFI Firmware Settings. I’ve outlined the steps below.

  1. Hold down the Shift key while you click Restart. This will cause your computer to bring up a special menu.

    Hold Down Shift and Restart
    Hold down “SHIFT” and click Restart
  2. Next you need to navigate the option screens to find “UEFI Firmware Settings”
    1. Select “Troubleshoot”
    2. Select “Advanced options”
    3. Select “UEFI Firmware Settings”
    4. Restart

    Steps to UEFI Firmware Settings
    Steps to UEFI Firmware Settings
  3. This will reboot you into your PC’s UEFI settings which looks a lot like a typical BIOS menu.
  4. Enable Virtualization
    Your system may be different. My system had a “Virtualization” setting located under the “Security” tab. Once I located the “Virtualization” setting I noticed that “Intel (R) Virtualization Technology” was indeed set to Disabled. I enabled it, saved the setting, and restarted my machine.

    Enable Virtualization via BIOS
    Enable Virtualization via BIOS

After enabling “Virtualization” I tried to start the VirtualBox VM one more time. BOOM. It worked. I ran vagrant up via a ConEmu console and… success.

In Conclusion

Chassis.io is a pretty sweet project. If your system is set up correctly then Chassis.io “just works”. In my case my system needed “Virtualization” enabled via a UEFI Firmware Setting.

Have a stupendous day! 🙂

Entity Framework: Update-Database Migrates the Wrong DB

Recently I made the switch from using Visual Studio 2015 to using Visual Studio 2017. For the most part the transition was easy. However, I ran into an issue with Entity Framework updating the wrong database. I’m posting the solution here so I don’t forget 🙂

TL;DR
If you are experiencing issues with Entity Framework, check that your startup project is the correct one.

EF Update-Database Is Not Working

My current setup involves using a local SQL Server Express database. I check the database via SQL Server Management Studio (ManStu) when I run Update-Database to ensure my changes take place. When I run Update-Database from Visual Studio 2015 the changes are reflected in the database. When I run Update-Database from Visual Studio 2017 the changes are not reflected in the database.

Why does Update-Database work correctly in Visual Studio 2015 but not in Visual Studio 2017? And why does Visual Studio 2017 tell me that the changes were applied successfully?

I decided to take a look at the output of Update-Database -Verbose to see if it yielded any helpful information. There I saw:

Target database is: 'MySpecialDB' (DataSource: (localdb)\v11.0, Provider: System.Data.SqlClient, Origin: Convention).

Entity Framework was using (localdb) and not the SQL Server Express database I set up in the app.config. That explains why the changes were “applied successfully”. However, why was Entity Framework using the wrong database?

The Not So Thrilling Simple Solution

I pursued a number of different routes looking for the solution to this issue. In the end the solution is painfully simple: the wrong startup project was selected. That’s it. In Visual Studio 2015 I was using a different startup project. In Visual Studio 2017 I never set a startup project, and so one was selected automatically.

As it turns out Entity Framework pulls meaningful information (like database connection information) out of the startup project. The fact that I had the wrong startup project selected in Visual Studio 2017 was the reason why my Entity Framework Update-Database commands were not working the way I expected.

So, lesson learned, if you are experiencing issues with Entity Framework then check your startup project. It could be that you have the wrong startup project selected 🙂
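
As an aside, if you’d rather not depend on whichever startup project happens to be selected, EF6’s Update-Database can be told explicitly where to look. The project name below is hypothetical:

# Read connection information from a specific startup project
Update-Database -Verbose -StartupProjectName "MyWebApp"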

Fixing UI-SREF-ACTIVE – Specifying a Default Abstract State

I’ve recently begun working with Angular and by extension Angular UI-Router. The fact that you are reading this means that you likely have as well. That said, let’s all pause for a moment and cry together. I know it’s hard. You will get through it. It will be ok. We can do this.

Basic ui-sref-active Usage

One of the things that UI-Router gives you is the ability to add a class to an element if that element’s state is currently active. This is done via the ui-sref-active directive.


<ul class="nav navbar-nav" ng-controller="navController">
   
   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="home">Home</a></li>

   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="notHome">Not Home</a></li>

</ul>

So above we have some basic navigation with two states: the home state and the notHome state. The ui-sref-active directive takes care of adding the active class to whichever li contains the state that is currently active.

The Problem with Abstract States

The problem is that the ui-sref-active directive does not work correctly (or as we expect) when the parent state is an abstract state.

Let’s say you want to expand your “home” state a bit. Maybe you want to add a “dashboard” state and from there link to a “messages” state. You might set up your $stateProvider a bit like this.

$stateProvider
	.state("home",
	{
		abstract: true,
		url: "/home"
	})
	.state("home.dashboard", {
		url: "/dashboard",
		views: {
			"content@": {
				templateUrl: "app/home/dashboard.html",
				controller: "DashboardController"
			}
		}
	})
   .state("home.messages", {
		url: "/messages",
		views: {
			"content@": {
				templateUrl: "app/home/messages.html",
				controller: "MessagesController"
			}
		}
	});

You’ll see we’ve set up home as an abstract state. By default we want to land on our home.dashboard state. We also want ui-sref-active to set the active class on our “Home” link regardless of which child state we are on.


<ul class="nav navbar-nav" ng-controller="navController">
   
   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="home.dashboard">Home</a></li>


   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="notHome">Not Home</a></li>

</ul>

You will notice that in the code above we are now using ui-sref to link to home.dashboard. This is where the problem with ui-sref-active crops up: it will only apply the active class when the state is exactly home.dashboard. We want the active class to appear on any child of the home state. As it is, the ui-sref-active directive will not detect home.messages as active. So the question becomes, “how can we fix ui-sref-active so that it detects a parent abstract state”?

Fixing ui-sref-active

The answer comes from Tom Grant in the form of a comment on a GitHub issue.

Tom informs us that there is an undocumented, built-in solution to this ui-sref-active problem. The solution, he says, is to “use an object (like with ng-class) and it will work”.

Code examples that Tom provides:

<!-- This will not work -->
<li ui-sref-active="active">
   <a ui-sref="admin.users">Administration Panel</a>
</li>

<!-- This will work -->
<li ui-sref-active="{ 'active': 'admin' }">
   <a ui-sref="admin.users">Administration Panel</a>
</li>

That’s it. Now we can link to children of abstract ui-router states and ui-sref-active will behave the way we expect it to.
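
Applied to our navigation from earlier, the “Home” link becomes:

<li class="nav-node nav" ui-sref-active="{ 'active': 'home' }"><a ui-sref="home.dashboard">Home</a></li>

Since 'home' matches the abstract parent, the active class now appears for home.dashboard, home.messages, and any other child of home.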

Ubuntu Server not completing upgrade

It’s been about seven months since I set up a wireless GitLab server. Since then I’ve figured out how to list updatable packages on Ubuntu Server. I’ve also performed several updates using sudo apt-get update && sudo apt-get upgrade.

gzip: stdout: No space left on device

Today I ran into a new problem. Upon trying to perform an update I was presented with a peculiar error. It said gzip: stdout: No space left on device and it told me to run apt-get -f install to fix things up. So… that’s what I tried doing. I tried running the apt-get -f install command, but to no avail. The command would not complete successfully.

This is about the time when I start getting really annoyed with Linux and the command line and all the things associated with configuring things manually like do I really need to download the entirety of the Linux MAN files inside my HEAD? DO I NEED TO DO THAT? GAHasldkjsadljfsadfsdsdf!!!!

Calm yourself.

The /boot partition is 100% full

Ok, so it turns out that the apt-get process can fail if the /boot partition becomes 100% full. There were a number of suggestions online that indicated you needed to clean out the /boot partition by removing old linux-image packages that you don’t need anymore. Many of these suggestions involved using sudo apt-get remove [package-name] or sudo apt-get autoremove, which are both completely valid options… IF APT-GET WERE WORKING. But apt-get is not working; that’s the problem.
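
You can confirm this is what you’re dealing with before going any further:

# If /boot shows 100% under Use%, this post is for you
df -h /boot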

So… I Googled a lot and dug through a lot of forums. Finally I stumbled on this uber-helpful answer on AskUbuntu. I’ll go ahead and paraphrase the answer below so that I can easily find it again. Yes. This is all about me.

Cleaning up the /boot partition

In the case where your /boot partition becomes totally full, you can use these steps to clean it up. (From flickerfly on AskUbuntu.)

  1. Run the following command to get a list of the linux-image files that you don’t need anymore.
    sudo dpkg --list 'linux-image*'|awk '{ if ($1=="ii") print $2}'|grep -v `uname -r`
    
  2. Create a command to remove the files you don’t need anymore. You can do that with a command like this (where brace expansion is used to save keystrokes). Use the output from the command above to build your command.
    EXAMPLE
    sudo rm -rf /boot/*-3.2.0-{23,45,49,51,52,53,54,55}-*
    
  3. Now that apt-get has space to work with you can actually run sudo apt-get -f install to clean things up.
  4. Use Purge to manually resolve issues with “Internal Errors” (if you get any internal errors).
    EXAMPLE
    sudo apt-get purge linux-image-3.2.0-56-generic
    
  5. Run `sudo apt-get autoremove` to clean up anything orphaned by the manual clean.
  6. Now you can finally proceed with those updates you were wanting to do.
    sudo apt-get update && sudo apt-get upgrade
    

Party?

We can party now I think.

Secrets were required, but not provided

Psst… TL;DR -> rebooting my wireless router fixed the problem.

A few months ago I set up a wireless GitLab server. This server has been working great. Once in a while I check up on it via SSH and make sure it’s updated. Otherwise, I leave it alone and it’s happy.

That is until today.

Secrets were required, but not provided

Today, for some reason, I could not access my GitLab server via the web interface or push to it via the git CLI. In fact, I couldn’t even SSH into it. I had to pull out the ol’ physical monitor and keyboard and MANUALLY connect. Shudder.

The first thing I do upon connecting to the server is try to ping google.com, of course. It didn’t work. The server could not find the address for Google, and as anyone knows, if you cannot find Google then the internet does not exist. Plain and simple. You might as well be trying to fly a kite in the Mariana Trench.

Now, up until this point I’d had no issues connecting to the internet. The server automatically connects to the Wi-Fi, no big deal. However, I thought that maybe the network authorization had expired or something? Maybe I can only connect for a few months at a time before re-authenticating. So I tried just that. I whipped out my old friend nmcli and ran:

nmcli device wifi connect MyAccessPoint password 123456789ACB

This is when I saw the dreaded Secrets were required, but not provided response. Well – I’m not sure what secrets it wants me to tell it, but the password was right, and I’m certainly not telling it who my favorite little pony is, or even whether I like little ponies.

I tried several more times. I tried rebooting the server. Nothing worked. It was at that point that I, using another computer, logged into my wireless router and told it to reboot. Several minutes later, everything was fine.

Rebooting my router fixed the problem

Why? I don’t know. There has been some talk that some routers will auto select channels that some linux machines do not like. I think that was likely the original issue. Rebooting the router worked because another channel was selected. In any case… long story short. If you are having issues with your wireless server not connecting to your network then try rebooting your router.