Clerk Auth Redirect/Reload Loop

I am currently looking into alternative forms of user authorization and authentication. One of the solutions I’m evaluating is Clerk. It’s pretty sweet and has a lot of cool features baked in. (Although MFA support is a premium add-on, and they aren’t super upfront about that.)

One of the issues I ran into while implementing Clerk was a redirect loop. I set my home page within the Clerk Dashboard, and when I reloaded my app: boom, REDIRECTION FOR DAYS. Clerk kept reloading the home page for all eternity.

So, I added an onbeforeunload event handler to the page with a debugger statement inside it. This paused the page in the inspector before it reloaded and let me actually see what was going on.
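In case you want to try the same trick, here’s roughly what that looks like (run it in the page or paste it into the console; it only pauses while DevTools is open):

window.addEventListener("beforeunload", () => {
	// Pause in the inspector right before the page unloads/reloads
	debugger;
});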

It turns out that Clerk was outputting an error message into the console. This error message is pictured below:

Clerk Error Message in JS Console

The <SignUp/> and <SignIn/> components cannot render when a user is already signed in, unless the application allows multiple sessions. Since a user is signed in and this application only allows a single session, Clerk is redirecting to the Home URL instead. (This notice only appears in development)

🔒 Clerk

Well – ok then. Clerk is redirecting to the Home URL (which is the one it’s already on) and causing a permanent redirect loop. It seems like this would be handled better by simply _not_ rendering the SignIn or SignUp components when the conditions for their existence aren’t met.

Hopefully this helps you out! You might consider making your home page and your sign-in page separate pages, or conditionally rendering those components, so that Clerk can be happy and not mess things up.
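If you go the conditional route, here’s a rough sketch. It assumes you’re on Clerk’s React SDK (@clerk/clerk-react) and its SignedIn/SignedOut control components; adapt it to whatever framework integration you’re actually using.

import { SignedIn, SignedOut, SignIn, UserButton } from "@clerk/clerk-react";

function HomePage() {
	return (
		<main>
			{/* Only render the sign-in form when nobody is signed in */}
			<SignedOut>
				<SignIn />
			</SignedOut>
			{/* Signed-in users get the app instead, so Clerk has nothing to redirect */}
			<SignedIn>
				<UserButton />
				<p>Welcome back!</p>
			</SignedIn>
		</main>
	);
}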

POST Body not Included with JavaScript Fetch

I just ran into a problem and I wanted to document it for myself and for anyone else who might have issues. First I describe the problem, then I give the solution. Scroll down if you’re looking for the solution.

The Problem

After posting with JavaScript fetch I did not see the “body” arguments come through on the server. The method of the fetch was set to POST and the body included was an instance of FormData. According to the documentation on MDN, everything should’ve worked. So why wasn’t it working?

The Basic Client Side Code

const body = new FormData(myForm)
// assume myForm.action = "https://example.com/ajax/post"
const response = await fetch(myForm.action, {
	method: "post",
	body,
})

The Basic Server Side Code

<?php
// file: index.php within the ajax/post directory

// don't bother processing the post if there is none
if(empty($_POST)){
	exit;
}

// ... processing code below

I spent some time debugging, and without a doubt, every POST request to the index.php file arrived without the $_POST array filled out. The $_REQUEST array was empty as well, and even the oft-touted file_get_contents('php://input') came up empty.

The Solution

You aren’t going to like it. I don’t like it. The solution to this problem is so annoying that you’ll just facepalm like Picard.

Add a slash to the end of the url you are posting to.

The problem url is: https://example.com/ajax/post
The working url is: https://example.com/ajax/post/

When the slash-less URL is posted to, the server responds with a 301 redirect before the index.php file is ever hit. But why? The problem is that you do not have a trailing slash in your URL. That’s it. You are posting to an index.php file within a directory, but your URL does not have a trailing slash. So your server helpfully redirects you to the URL with a trailing slash, and your posted body gets lost along the way (browsers typically follow a 301 with a GET request).

Yep, that’s it. Add a trailing slash and you’ll see your body come through when debugging.
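If you’d rather not depend on remembering the slash, you can normalize the URL in code before posting. A small sketch, assuming your AJAX endpoints are directory-style URLs (no query strings) like the one above:

// Make sure directory-style endpoints end with a slash so the server
// doesn't 301-redirect the POST (and lose the body) along the way.
function withTrailingSlash(url) {
	return url.endsWith("/") ? url : url + "/";
}

const response = await fetch(withTrailingSlash(myForm.action), {
	method: "post",
	body: new FormData(myForm),
});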

RocketChat server not running. Cancelling

As you might know, I’ve set up a RocketChat server recently on Digital Ocean. So far it’s been working great. An update every once in a while is all it needs.

However, yesterday I attempted an update that failed. From then on, every attempted update resulted in “RocketChat server not running. Cancelling”. This was very frustrating.

First, a few commands to try that might help:

  1. systemctl restart rocketchat.service – This will start your RocketChat server in case it is stopped.
  2. systemctl status rocketchat.service – Use this command to check the results of the previous command. Typically this will report that the service is “Active” if the previous command was successful.

In my case, the second command resulted in a “failed” state. The command itself gave me some information as to what the failure was, but not a lot of context as to what caused the failure. However, it did show me the process that it attempted to run. It said, ExecStart=/opt/nvm/versions/node/v14.19.3/bin/node /opt/Rocket.Chat/main.js (code=exited, status=1/FAILURE).

Alright! We’re getting somewhere. With that I was able to figure out what command failed and where it was run. I navigated directly to the /opt/Rocket.Chat directory, where the failure was occurring, and ran node main.js. The results of this command were much more helpful. They told me this: Error: Cannot find module '@meteorjs/reify/lib/runtime'. That looks like an issue with npm dependencies.

So, I poked around the Rocket.Chat directory structure and looked for dependencies for the Rocket.Chat server. I found what I was looking for in the /opt/Rocket.Chat/programs/server directory.

From this directory I ran two commands:

  1. npm install
  2. npm ci

Afterwards I attempted to start the RocketChat server again using the systemctl restart rocketchat.service command. I checked it with systemctl status rocketchat.service and found that it was working now! RocketChat was back to running normally. The problem with “RocketChat server not running. Cancelling” was gone!
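For future me, the whole recovery boiled down to these commands (paths are from my install under /opt; yours may differ):

# See why the service is failing
systemctl status rocketchat.service

# Run the server by hand to get a real error message
cd /opt/Rocket.Chat
node main.js    # -> Error: Cannot find module '@meteorjs/reify/lib/runtime'

# Reinstall the server's npm dependencies
cd /opt/Rocket.Chat/programs/server
npm install
npm ci

# Restart the service and verify
systemctl restart rocketchat.service
systemctl status rocketchat.service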

Google Fi Auto Connect Issues

Ok, so I use Google Fi (formerly known as Project Fi) as my phone provider. I have a Pixel 2 and haven’t felt the need to upgrade. Recently I’ve noticed issues with my service. Specifically, my Pixel 2 will connect to an H+ network or an Edge network in an area I know has reliable 4G LTE. So, what gives?

First, a quick and dirty explanation of the Google Fi network based on my limited understanding 😁. Google Fi utilizes the T-Mobile (which I believe includes Sprint now) and US Cellular networks, as well as Wi-Fi, to provide cellular service to its customers. Phones on the Google Fi network smartly switch to whichever provider has the best signal. At least that’s the idea.

Knowing that Fi uses multiple cell networks to provide service, I wondered which network my phone was using. Using SignalCheck Lite I was able to determine that my phone was connecting to the T-Mobile network by default. In my area, US Cellular beats T-Mobile coverage hands down. There is no competition. So what is the deal with my phone auto-connecting to Edge and H+ networks?

Honestly, I don’t know yet. I strongly suspect a recent update to the Google Fi app or services set my phone to prefer T-Mobile regardless of network speed. Whether this was an intentional change or a bug in the auto-connect code, I don’t know. I’ve been able to temporarily fix the issue by forcing a connection to US Cellular using the Google Fi dialer code *#*#34872#*#* (FI USC in the table below).

Google Fi Dialer Codes

I pulled these codes from this post on ArkieNet. I’m including them here just in case the post poofs from the internet in the future.

Note this paragraph from the original article:

The following options are only available for “Designed for Fi” phones. They will not work on the iPhone or “Compatible with Fi” phones because they are T-Mobile only.  See which class of phone you have here.

ArkieNet
ALPHA CODE   DIALER CODE         DESCRIPTION
FI AUTO      *#*#342886#*#*      Set carrier selection to automatic.
FI NEXT      *#*#346398#*#*      Select next carrier.
FI SPR       *#*#34777#*#*       Select Sprint for 2 hours.
FI TMO       *#*#34866#*#*       Select T-Mobile for 2 hours.
FI USC       *#*#34872#*#*       Select US Cellular for 2 hours.
FI SIMON     *#*#3474666#*#*     Select Three (UK only).

ALPHA CODE   DIALER CODE         DESCRIPTION
FIXME        *#*#34963#*#*       Force reactivation.
FI INFO      *#*#344636#*#*      Get information about the current network.
INFO         *#*#4636#*#*        Get general phone information.
DEBUG        *#*#33284#*#*       Phone debug options.
PRL          *#*#775#*#*         Force download of Preferred Roaming List (Sprint).
PRL          *228                Force download of Preferred Roaming List (US Cellular).
FI ROAM      *#*#347626#*#*      Turn on International Roaming.
SWITCH SIM   *#*#794824746#*#*   Switch to / from eSIM.

Chassis.io Timeout Issue

TL;DR -> Try enabling Virtualization in your BIOS.

I’m trying out http://chassis.io as a way to easily set up a WordPress development environment on Windows. It’s actually quite easy, and everything works almost exactly like the Chassis Get Started guide describes.

However, I ran into a timeout issue when attempting to boot up the virtual machine using vagrant up. On first run the process installed the necessary dependencies and wired most things up, but then it hung for a considerable amount of time while booting the virtual machine. Eventually it told me it had timed out. It never started the virtual machine.

VT-x/AMD-V hardware acceleration is not available on your system

Hrmm… I wonder why it’s timing out. Chassis.io uses Vagrant and VirtualBox. So I spun up VirtualBox to see if I could manually start the VM myself. As it turns out, I could not. VirtualBox threw up the following error:

VirtualBox - Error
VT-x/AMD-V hardware acceleration is not available on your system. Your 64-bit guest will fail to detect a 64-bit CPU and will not be able to boot.

Well, that’s nice… (Hint: it’s not nice).

First Try: Disabling Hyper-V

I did some searching and found a number of posts indicating the solution was to disable Hyper-V. It sounds like this works for a lot of people. Scott Hanselman actually wrote up a post about how to “Switch easily between VirtualBox and Hyper-V with a BCDEdit boot Entry in Windows 8.1“. I tried this approach. It did not work for me (you can remove a BCDEdit entry using bcdedit /delete {ENTRYGUID}, by the way).
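For reference, the boot-entry approach looks roughly like this from an elevated command prompt. This is my recollection of Hanselman’s post, so double-check before running it; the GUID placeholder is whatever the /copy command prints:

REM Create a second boot entry with the hypervisor turned off
bcdedit /copy {current} /d "Windows (No Hyper-V)"

REM Use the GUID printed by the command above
bcdedit /set {YOUR-NEW-GUID} hypervisorlaunchtype off

REM Remove the entry later if you no longer want it
bcdedit /delete {YOUR-NEW-GUID}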

Second Try: Enabling Virtualization via BIOS

During my search I stumbled upon this SuperUser answer. The answer indicated that, depending on your system, Virtualization could be enabled via the BIOS.

In my case, enabling Virtualization via BIOS involved booting to the UEFI Firmware Settings. I’ve outlined the steps below.

  1. Hold down the Shift key while you click Restart. This will cause your computer to bring up a special menu.

    Hold down “SHIFT” and click Restart
  2. Next, navigate the option screens to find “UEFI Firmware Settings”:
    1. Select “Troubleshoot”
    2. Select “Advanced options”
    3. Select “UEFI Firmware Settings”
    4. Restart

    Steps to UEFI Firmware Settings
  3. This will reboot you into your PC’s UEFI settings, which look a lot like a typical BIOS menu.
  4. Enable Virtualization.
    Your system may be different. My system had a “Virtualization” setting located under the “Security” tab. Once I located it, I noticed that “Intel (R) Virtualization Technology” was indeed set to Disabled. I enabled it, saved the setting, and restarted my machine.

    Enable Virtualization via BIOS

After enabling “Virtualization” I tried to start the VirtualBox VM one more time. BOOM. It worked. I ran vagrant up via a ConEmu console and… success.

In Conclusion

Chassis.io is a pretty sweet project. If your system is set up correctly then Chassis.io “just works”. In my case, my system needed “Virtualization” enabled via a UEFI firmware setting.

Have a stupendous day! 🙂

Entity Framework: Update-Database Migrates the Wrong DB

Recently I made the switch from using Visual Studio 2015 to using Visual Studio 2017. For the most part the transition was easy. However, I ran into an issue with Entity Framework updating the wrong database. I’m posting the solution here so I don’t forget 🙂

TL;DR
If you are experiencing issues with Entity Framework then check that your startup project is the correct one.

EF Update-Database Is Not Working

My current setup involves using a local SQL Server Express database. I check the database via SQL Server Management Studio (ManStu) when I run Update-Database to ensure my changes take place. When I run Update-Database from Visual Studio 2015 the changes are reflected in the database. When I run Update-Database from Visual Studio 2017 the changes are not reflected in the database.

Why does Update-Database work correctly in Visual Studio 2015 but not correctly in Visual Studio 2017? Why does Visual Studio 2017 tell me that the changes were applied successfully?

I decided to take a look at the output of Update-Database -Verbose to see if it yielded any helpful information. There I saw:

Target database is: 'MySpecialDB' (DataSource: (localdb)\v11.0, Provider: System.Data.SqlClient, Origin: Convention).

Entity Framework was using (localdb) and not the SQL Server Express database I had set up in app.config. That explains why the changes were “applied successfully” without ever showing up in my database. But why was Entity Framework using the wrong database?

The Not So Thrilling Simple Solution

I pursued a number of different routes looking for the solution to this issue. In the end the solution is embarrassingly simple: the wrong startup project was selected. That’s it. In Visual Studio 2015 I was using a different startup project. In Visual Studio 2017 I never set a startup project, so one was selected automatically.

As it turns out, Entity Framework pulls meaningful information (like database connection information) out of the startup project. The fact that I had the wrong startup project selected in Visual Studio 2017 was the reason my Entity Framework Update-Database commands were not working the way I expected.
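If you’d rather not rely on whichever startup project happens to be selected, the EF6 migration commands in the Package Manager Console also take project parameters. Something like this should do it (the project names here are made up for illustration):

# Run the migrations from MyData, but read config/connection info from MyWebApp
Update-Database -Verbose -ProjectName MyData -StartUpProjectName MyWebApp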

So, lesson learned, if you are experiencing issues with Entity Framework then check your startup project. It could be that you have the wrong startup project selected 🙂

Fixing UI-SREF-ACTIVE – Specifying a Default Abstract State

I’ve recently begun working with Angular and by extension Angular UI-Router. The fact that you are reading this means that you likely have as well. That said, let’s all pause for a moment and cry together. I know it’s hard. You will get through it. It will be ok. We can do this.

Basic ui-sref-active Usage

One of the things UI-Router gives you is the ability to add a class to an element if that element’s state is currently active. This is done via the ui-sref-active directive.


<ul class="nav navbar-nav" ng-controller="navController">
   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="home">Home</a></li>
   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="notHome">Not Home</a></li>
</ul>

So above we have some basic navigation with two states: home and notHome. The ui-sref-active directive takes care of adding the active class to whichever li contains the link to the currently active state.

The Problem with Abstract States

The problem is that the ui-sref-active directive does not work correctly (or as we expect) when the parent state is an abstract state.

Let’s say you want to expand your “home” state a bit. Maybe you want to add a “dashboard” state and from there link to a “messages” state. You might set up your $stateProvider a bit like this.

$stateProvider
	.state("home",
	{
		abstract: true,
		url: "/home"
	})
	.state("home.dashboard", {
		url: "/dashboard",
		views: {
			"content@": {
				templateUrl: "app/home/dashboard.html",
				controller: "DashboardController"
			}
		}
	})
   .state("home.messages", {
		url: "/messages",
		views: {
			"content@": {
				templateUrl: "app/home/messages.html",
				controller: "MessagesController"
			}
		}
	});

You’ll see we’ve set up home as an abstract state. By default we want to land on our home.dashboard state. We also want ui-sref-active to set the active class on our “Home” link regardless of which child state we are on.


<ul class="nav navbar-nav" ng-controller="navController">
   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="home.dashboard">Home</a></li>
   <li class="nav-node nav" ui-sref-active="active"><a ui-sref="notHome">Not Home</a></li>
</ul>

You will notice that in the code above we are now using ui-sref to link to home.dashboard. This is where the problem with ui-sref-active crops up: it will only apply the active class when the state is exactly home.dashboard. We want the active class to appear on any child of the home state, but as it is, the ui-sref-active directive will not detect home.messages as active. So the question becomes: how can we fix ui-sref-active so that it detects a parent abstract state?

Fixing ui-sref-active

The answer comes from Tom Grant in the form of a comment on a GitHub issue.

Tom informs us that there is an undocumented built in solution to this ui-sref-active problem. The solution, he says, is to “use an object (like with ng-class) and it will work”.

Code examples that Tom provides:

<!-- This will not work -->
<li ui-sref-active="active">
   <a ui-sref="admin.users">Administration Panel</a>
</li>

<!-- This will work -->
<li ui-sref-active="{ 'active': 'admin' }">
   <a ui-sref="admin.users">Administration Panel</a>
</li>

That’s it. Now we can link to children of abstract ui-router states and ui-sref-active will behave the way we expect it should.

Ubuntu Server not completing upgrade

It’s been about seven months since I set up a wireless GitLab server. Since then I’ve figured out how to list updatable packages on Ubuntu Server. I’ve also performed several updates using sudo apt-get update && sudo apt-get upgrade.

gzip: stdout: No space left on device

Today I ran into a new problem. Upon trying to perform an update I was presented with a peculiar error. It said gzip: stdout: No space left on device and told me to run apt-get -f install to fix things up. So that’s what I tried. I ran the apt-get -f install command, but to no avail; it would not complete successfully.

This is about the time when I start getting really annoyed with Linux and the command line and all the things associated with configuring things manually like do I really need to download the entirety of the Linux MAN files inside my HEAD? DO I NEED TO DO THAT? GAHasldkjsadljfsadfsdsdf!!!!

Calm yourself.

The /boot partition is 100% full

Ok, so it turns out that the apt-get process can fail if the /boot partition becomes 100% full. There were a number of suggestions online that indicated you needed to clean out the /boot partition by removing old linux-images that you don’t need anymore. Many of these suggestions involved using sudo apt-get remove [package-name] or using sudo apt-get autoremove which are both completely valid options… IF APT-GET WERE WORKING. But apt-get is not working, that’s the problem.

So… I Googled a lot and dug through a lot of forums. Finally I stumbled on this uber helpful answer on askUbuntu. I’ll go ahead and paraphrase the answer below so that I can easily find it again. Yes. This is all about me.

Cleaning up the /boot partition

In the case where your /boot partition becomes totally full you can use these steps to clean it up. (From flickerfly on AskUbuntu).

  1. Run the following command to get a list of the linux-image files that you don’t need anymore.
    sudo dpkg --list 'linux-image*'|awk '{ if ($1=="ii") print $2}'|grep -v `uname -r`
    
  2. Create a command to remove the folders you don’t need anymore. You can do that with a command like this (where brace-expansion is used to save keystrokes). Use the output from the command above to build your command.
    EXAMPLE
    sudo rm -rf /boot/*-3.2.0-{23,45,49,51,52,53,54,55}-*
    
  3. Now that apt-get has space to work with you can actually run sudo apt-get -f install to clean things up.
  4. Use Purge to manually resolve issues with “Internal Errors” (if you get any internal errors).
    EXAMPLE
    sudo apt-get purge linux-image-3.2.0-56-generic
    
  5. Run `sudo apt-get autoremove` to clean up anything orphaned by the manual clean.
  6. Now you can finally proceed with those updates you were wanting to do.
    sudo apt-get update && sudo apt-get upgrade
    

Party?

We can party now I think.

Secrets were required, but not provided

Psst… tl;dr -> rebooting my wireless router fixed the problem.

A few months ago I set up a wireless GitLab server. This server has been working great. Once in a while I check up on it via SSH and make sure it’s updated. Otherwise, I leave it alone and it’s happy.

That is until today.

Secrets were required, but not provided

Today, for some reason, I could not access my GitLab server via the web interface or push to it via the git CLI. In fact, I couldn’t even SSH into it. I had to pull out the ol’ physical monitor and keyboard and MANUALLY connect. Shudder.

First thing I did upon connecting to the server was try to ping google.com, of course. It didn’t work. The server could not find the address for Google, and as anyone knows, if you cannot find Google then the internet does not exist. Plain and simple. You might as well be trying to fly a kite in the Mariana Trench.

Now, up until this point I’d had no issues connecting to the internet. The server automatically connects to the Wi-Fi, no big deal. But I thought that maybe the network authorization had expired or something? Maybe I can only connect for a few months at a time before re-authenticating. So I tried just that. I whipped out my old friend nmcli and ran:

nmcli device wifi connect MyAccessPoint password 123456789ACB

This is when I saw the dreaded Secrets were required, but not provided response. Well – I’m not sure what secrets it wanted from me. I mean, the password was right, and I’m certainly not telling it who my favorite little pony is, or even whether I like little ponies.

I tried several more times, and I tried rebooting the server. Nothing worked. It was at that point that I, using another computer, logged into my wireless router and told it to reboot. Several minutes later everything was fine.

Rebooting my router fixed the problem

Why? I don’t know. There has been some talk that some routers will auto-select channels that some Linux machines do not like. I think that was likely the original issue; rebooting the router worked because another channel was selected. In any case… long story short: if your wireless server is having trouble connecting to your network, try rebooting your router.

Password Protect a WordPress Subdirectory with .htaccess

There are questions all over the internet regarding how to password protect a sub-directory when you are using WordPress.

I just spent a long time fighting a frustrating battle with this as well. So I’m documenting the resolution here for my (and anyone’s) benefit.

In short

  1. WordPress does not mess with requests to actual directories or files.
  2. If WordPress is messing with your request then you aren’t requesting an actual directory or file.
  3. It’s likely your error codes aren’t set up to return actual files.
  4. Make sure your .htaccess file isn’t generating 500 errors (i.e. ensure the path to your .htpasswd file is correct).

Problem

I’ve added a .htaccess and .htpasswd file but all I see is a WordPress 404 page. I can’t stop crying because it’s not working and my brain hurts.

Yep. That happens. WordPress comes with the following .htaccess file by default:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

Let’s break this down.  First we are checking if the mod_rewrite module is even installed. If it is then we are turning the RewriteEngine on. That’s all great. We wouldn’t want to use the engine if it didn’t exist… right?

RewriteBase / – This sets the base of every subsequent Rule and Condition to the root `/`.  This way we don’t have to include the root directory at the beginning of any of our rules.

RewriteRule ^index\.php$ - [L] – This rewrite rule checks to see if we are on the index.php page already. The dash in the rule means do nothing. So… if we are already on index.php don’t do anything. The [L] option means that we should stop processing rules now. Don’t do anything else, we’ve got what we wanted. Quite literally this is the [L]ast rule that should be processed.

RewriteCond %{REQUEST_FILENAME} !-f – This condition makes sure that if the current request hits an actual existing file, nothing is rewritten. So WordPress won’t mess with your request if you link to an actual file.

RewriteCond %{REQUEST_FILENAME} !-d – This condition makes sure that if the current request hits an actual existing directory, nothing is rewritten. So WordPress won’t mess with your request if you link to an actual directory.

RewriteRule . /index.php [L] – Finally, if our request passed the above two conditions (it’s not an actual file and not an actual directory) then map the request to index.php. Now the request is mapped and WordPress can do its thing!

That’s Great But…

I know what you are thinking. You are thinking:

If what you are saying is true, then I shouldn’t be seeing a 404 page. My password protected directory actually exists!

Yes. You are correct, your directory does exist.

Solution

When you password protect a directory with .htaccess, you are telling the server to return a certain response code: the 401 response code, meaning the user is unauthorized, to be precise. When the browser receives this response code it triggers a username and password prompt. However, and here is the problem, the browser is never receiving that response code.

Why is the browser not receiving the response code?

Good question. If you remember, the WordPress .htaccess checks whether the requested URL points to an actual file or directory. It only rewrites you to the index.php file if you aren’t actually requesting a file or directory. When the server throws the 401 response code, you aren’t actually being served a file or directory; you are essentially getting nothing back (because you are unauthorized). So the WordPress .htaccess file is behaving correctly – it rewrites you to the index.php page and gives you a 404 (because more than likely your password protected directory does not match a permalink on your WordPress blog).

So… if WordPress is making sure that you actually requested a file then… you need to make sure that you are actually getting a file! You can do this by adding the following line to the top of your WordPress .htaccess file:

ErrorDocument 401 default

What you are doing is telling the server to return the default 401 file when it encounters a 401 response code. Once you are returning an actual file WordPress won’t try to grab your request.

Ok. I added that and I’m still having issues. What gives?

If you are like me, then the 401 response code fix wasn’t enough. You are still having the same issue and by now you are wanting to… oh gosh I can’t even think of anything to describe this type of pain.

Let’s look at our .htaccess file we are using to password protect our sub-directory. If you are anything like me your file might’ve looked something like this.

AuthType Basic
AuthName "Password Protected Area"
AuthUserFile /public_html/jeremysawesome.com/mySecretDirectory/.htpasswd
Require valid-user

This looks perfectly valid to me. However, it turns out this file was generating Internal Server Errors! (I know because I added an ErrorDocument 500 default line to my WordPress .htaccess file just for kicks.) But it shouldn’t be generating a 500 error unless I’m doing something wrong.

Turns out. I was.

The AuthUserFile argument needs to be the full server path to your .htpasswd file. Turns out, /public_html wasn’t actually the beginning of my server path. As a result the server was throwing a 500 error. Once I figured out what my entire full server path was, and added that to my .htaccess file, everything started working.
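Put together, the protected directory’s .htaccess ended up looking roughly like this (the full server path below is made up for illustration; use your host’s real path):

AuthType Basic
AuthName "Password Protected Area"
# Full server path to the password file, not a path relative to the web root
AuthUserFile /home/youraccount/public_html/jeremysawesome.com/mySecretDirectory/.htpasswd
Require valid-user

And remember that the ErrorDocument 401 default line from earlier lives in the WordPress .htaccess at the site root, not in this file.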

To Recap

  1. WordPress does not mess with requests to actual directories or files.
  2. If WordPress is messing with your request then you aren’t requesting an actual directory or file.
  3. It’s likely your error codes aren’t set up to return actual files.
  4. Make sure your .htaccess file isn’t generating 500 errors (i.e. ensure the path to your .htpasswd file is correct).

Whew! Thank goodness that’s over. Happy Blogging 🙂