I’m reproducing this recipe from the website “inspired2cook.com”, which doesn’t seem to exist anymore. Here’s a link to the Wayback Machine if you want to see the original.
Ingredients
These will be used throughout the recipe
2 Cups Whole Milk (not low-fat because it makes the dressing too thin)
2 Cups Mayonnaise (not low-fat because it makes the dressing too thin)
1 Tablespoon Brown Sugar
1 pound Elbow Macaroni
1/2 cup apple cider vinegar
4 green onions, sliced thin
1 large carrot, peeled and grated
1 celery rib, chopped fine
Salt & Pepper
Make Dressing
Whisk the following in a bowl:
1 ½ cups milk
1 cup mayonnaise
1 Tablespoon brown sugar
1/2 teaspoon salt
2 teaspoons pepper
Cook Pasta
Bring 4 quarts of water to a boil in a large pot. Add 1 tablespoon salt and the pasta and cook for about 15 minutes.
Make Salad Step 1
Drain the pasta and transfer it to a suitably sized bowl
Mix 1/2 cup Apple Cider Vinegar into the pasta
Cool Pasta – I usually cover it and put it in the fridge for about an hour
Stir about half the dressing into the pasta
Cool Completely – I usually cover it and put it in the fridge overnight
Make Salad Step 2
Add the following ingredients to the cooled salad and stir
the remaining half of the dressing
4 Green Onions, sliced thin
1 large carrot, peeled and grated
1 celery rib, chopped fine
1/2 cup milk
up to 1 cup mayonnaise (play around with this, could be you don’t want any, depends on how you want the consistency of the salad)
Salt and Pepper (to your liking; I usually grind a bunch of pepper over the top so that the top is nicely speckled)
I just experienced an issue which took me a day to figure out. So, as per normal, I’m going to document it here so that in the future I don’t have to bother looking it up!
My problem: every time I try to commit using git I am asked for my SSH key passphrase, even though I’ve ensured the SSH agent is running. I can pull from the remote, push to the remote, and do anything with the remote repository without needing to enter my passphrase multiple times. However, when I try to commit via git, I’m asked for my passphrase on every commit. Why is this a problem? Because when I rebase a branch of 150+ commits I’m having to enter my passphrase 150 times consecutively. That’s unsustainable.
Still, even armed with Christopher’s information, I couldn’t figure out how to fix the problem I was running into.
Since this post is meant to help guide me in setting this up from scratch (AGAIN), I’m going to go through all the relevant steps. But if you are just looking for the solution to the problem, skip ahead to the solution below.
Setup
Everyone’s setup is a bit different, but I’m going to document the relevant portions of mine.
Environment
Clean Install of Windows 11
Git Note: When installing I select the “Use Git and optional Unix tools from the Command Prompt” option
After my environment is set up, I set up SSH. This involves creating a new key, adding it to the SSH agent, and making sure the agent is running. I run all these commands from an instance of PowerShell Core (installed above).
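For reference, here is a minimal sketch of the key-creation and agent steps in PowerShell (the key type and email are my own choices, not requirements):

# generate a new key; ed25519 is a sensible default and the email is a placeholder
ssh-keygen -t ed25519 -C "you@example.com"
# make sure the Windows OpenSSH agent starts automatically, then start it now
# (these two commands may need an elevated prompt)
Get-Service ssh-agent | Set-Service -StartupType Automatic
Start-Service ssh-agent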
Adding the key to the agent is simple enough. You just need to run ssh-add and give it the location of the private SSH key you generated in the step above. In my case, assuming the key sits at the default path, that’s as simple as:
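# adjust the path to wherever your private key actually lives
ssh-add ~/.ssh/id_ed25519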
Now I would typically either restart my computer or close the current PowerShell Core window and open a new one.
Adding the SSH Key to GitHub
If you want to sign commits with GitHub then you need to upload your SSH key 2wice (that’s a clever way of writing twice): once as your “Authentication Key”, which grants access to the repo, and once as your “Signing Key”. You can do both at https://github.com/settings/keys – just click “Add Key” in the SSH Keys section and upload one for Authentication, then click “Add Key” again and upload one for Signing.
Signing Git Commits
Once again referring to Christopher’s post, I do the following, making sure to switch out the path to my public SSH key:
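The usual set of commands for SSH commit signing (git 2.34 or newer) looks something like this; the exact key path is an assumption:

# tell git to sign with an SSH key instead of GPG, and which key to use
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
# sign every commit by default
git config --global commit.gpgsign true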
Ok. Now the problem. You’ve successfully connected to GitHub and cloned your repo. Now you start making commits and find that you have to enter your SSH key passphrase over and over, despite the SSH agent running.
It took me a long time to figure out, but the problem is likely that you have two versions of SSH on your machine: the one that comes with Windows by default, and the one that came with Git for Windows when it was installed.
To solve this problem, tell Git For Windows to “Use external OpenSSH” when installing Git. The following comes from this answer on Stack Overflow by Ajedi32.
If you used the bundled OpenSSH installation, git will default to that and will not use any keys imported into Windows’ native ssh-agent service. You need to select “Use external OpenSSH” instead when prompted.
If you did not make this selection when installing, you should be able to fix that by just running the installer again.
Previous Versions of this article said the following. This is no longer what I recommend.
You’ll remember we selected the “Use Git and optional Unix tools from the Command Prompt” option. This option adds the referenced Git and Unix tools to your system’s PATH. The SSH agent that is registered and used for authentication is the one that comes installed with Windows. The one that Git is using for signing is the one that comes with Git.
The order of the PATH entries determines which version of the ssh-agent is used; the first one encountered is the one that Windows uses. So, to solve this problem, make sure C:\Program Files\Git\usr\bin is above %SYSTEMROOT%\System32\OpenSSH\ in your system PATH.
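A quick way to check which binary is winning, from PowerShell:

# prints the full path of the ssh-agent that the PATH resolves to first
(Get-Command ssh-agent).Source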
I am currently looking into alternative forms of user authorization and authentication. Among the solutions I am looking into is Clerk. It’s pretty sweet and has a lot of cool features baked in. (Although the MFA support is a premium add-on and they aren’t super upfront about that).
One of the issues I ran into while implementing Clerk was a redirect loop. I set my home page within the Clerk Dashboard and when I reloaded my app, boom REDIRECTION FOR DAYS. Clerk was continuing to reload the home page for all eternity.
So, I added an onbeforeunload handler to the page with a debugger statement inside of it. This paused the page in the inspector before it reloaded and allowed me to actually see what was going on.
It turns out that Clerk was outputting an error message into the console. That error message is reproduced below:
The <SignUp/> and <SignIn/> components cannot render when a user is already signed in, unless the application allows multiple sessions. Since a user is signed in and this application only allows a single session, Clerk is redirecting to the Home URL instead. (This notice only appears in development)
– Clerk
Well – ok then. Clerk is redirecting to the Home URL (which is the one it’s already on) and causing a permanent redirect loop. It seems like this would be handled better by simply _not_ loading the SignIn or SignUp components should the conditions for their existence fail.
Hopefully this helps you out! You might consider making your home page and your sign-in page different pages, or conditionally rendering those components (a sketch follows), so that Clerk can be happy and not mess things up.
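As a sketch, the conditional approach might look like this with React, assuming the @clerk/clerk-react package and an app already wrapped in ClerkProvider (the page itself is a placeholder):

import { SignedIn, SignedOut, SignIn } from '@clerk/clerk-react'

// only render <SignIn/> when nobody is signed in, so Clerk never
// hits its "already signed in" redirect on the home page
export default function HomePage() {
  return (
    <>
      <SignedOut>
        <SignIn />
      </SignedOut>
      <SignedIn>
        <p>Welcome back!</p>
      </SignedIn>
    </>
  )
}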
That’s a good question. You might ask this when you are implementing a new project and trying to figure out where to store your secret configuration values.
Before getting into the difference, let’s state the similarities:
Both allow storage of environment specific configuration values and secrets from outside of your codebase.
Both allow retrieval of environment specific configuration values and secrets from outside your codebase.
Both files use a key-value pair format.
Both allow comments within the files.
So what about the differences?
The file extensions are different: .ini vs .env.
The method of storing the data is different.
The method of retrieving the data is different.
The syntax is different (see the example below).
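To make the syntax difference concrete, here is the same made-up configuration in each format:

# config.env – KEY=VALUE pairs, one per line; lines starting with # are comments
DB_HOST=localhost
DB_PORT=3306

; config.ini – key = value pairs; lines starting with ; are comments, [sections] group values
[database]
host = localhost
port = 3306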
That probably doesn’t answer your question though. You likely aren’t interested as much in what the differences are as you are in why you would use one over the other.
Why use .env instead of .ini or .ini instead of .env?
This is likely what you are wanting to know: why would you use one over the other? So let’s list some pros and cons of each format to help make the decision.
.env Pros
language-agnostic
allows referencing other .env values as variables
libraries like phpdotenv provide robust functionality
Let’s expand a little on the language-agnostic pro above.
First scenario: you are coming to PHP from another environment, like Ruby, and you are probably wondering where to put your secrets and configuration values for your code. A .env file just makes sense; these are supported in Ruby, so there must be a way to add them in PHP. Voila! There is a package that supports .env files that you can easily include in your PHP code, and you don’t have to think about it anymore. It probably doesn’t even occur to you to use the built-in .ini support.
Second scenario: you are using both server-side JS, like Node, and PHP. You want to share configuration values between the two systems. Both support .env files, allowing you to easily share your secrets between both languages.
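As a concrete sketch, loading a .env file with phpdotenv looks roughly like this (the file layout and key names are assumptions carried over from the example above):

<?php
// composer require vlucas/phpdotenv
require __DIR__ . '/vendor/autoload.php';

// load .env from the project root into $_ENV and $_SERVER
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();

echo $_ENV['DB_HOST']; // "localhost"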
.ini Pros
built-in PHP support (see the sketch after this list)
allows grouping of values via sections
supports typed values via INI_SCANNER_TYPED
allows interpolating other config values and environment variables
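For comparison, the built-in route needs no extra dependencies. A sketch that reads the config.ini from the earlier example:

<?php
// true enables [section] grouping; INI_SCANNER_TYPED converts
// "3306" to an int, "true"/"false" to bools, and so on
$config = parse_ini_file('config.ini', true, INI_SCANNER_TYPED);

echo $config['database']['host'];      // "localhost"
var_dump($config['database']['port']); // int(3306)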
.env Cons
requires a third-party library and Composer to work
features provided change based on library used for loading
some libraries load configuration secrets into globally accessible variables like $_SERVER or $_ENV, which could expose your secrets to code you don’t intend to
.ini Cons
requires familiarity with PHP
Which one should I use?
I’m going to give you the answer you don’t want to hear. That’s up to you. It really depends on the needs of your environment and your program.
As far as my personal preferences go, I don’t like including unnecessary dependencies in my codebase. Therefore, if I’m using PHP to develop my app, and I have no need to share my config secrets with other languages, then I would use an .ini file and the built-in support for those that PHP provides.
Ok, let’s cut to the chase. Everyone knows that converting a JavaScript Map to JSON does not work out of the box. By default, the JSON.stringify method will return a {} when converting a Map. It’s super helpful, because it returns an empty object, which is usually precisely what we don’t want. That’s a wee bit frustrating!
How Not to Serialize a Map to JSON
The internet has provided us with the silver bullet, though, right? Object.entries is great and can do all things, right? WRONG. That is wrong and you are wrong if you think that it’s right. You see what I’m getting at?
The existing advice is to use Object.fromEntries and Object.entries in tandem to make it easy to convert a map to and from JSON. The code below shows this method in all its glory.
// ### inspired code below
const mapToSerialize = new Map([[1, 'first key']])
const serializedMap = JSON.stringify(Object.fromEntries(mapToSerialize))
// now deserialize
const deserializedMap = new Map(Object.entries(JSON.parse(serializedMap)))
Isn’t the code above beautiful? Doesn’t it just wonderfully handle our serialization in the best of ways? No! It doesn’t. Let’s compare the original map to the deserialized map and see how amazing the code works.
The original map looks like this when logged to the console Map(1) {1 => 'first key'}. The deserialized map looks like this when logged to the console: Map(1) {'1' => 'first key'}. Did you spot the difference?
Just to make things clear, let’s try to delete the keyed value out of our Map.
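Working against the maps from the snippet above:

// the original map still has its number key
mapToSerialize.delete(1)    // true – the number key is found and removed
// the round trip through Object.entries turned the key into a string
deserializedMap.delete(1)   // false – there is no number key to delete
deserializedMap.delete('1') // true – the key is a string now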
So really, in the end, our serialization using the currently advised method totally works. We serialize our map, deserialize it, and then can’t use it properly. Just as we expected.
NO. It doesn’t work correctly. The difference is that the original map uses an integer key, and the deserialized map uses a string key. When we attempt to delete the value out of the deserialized map, using the key, it fails because the type doesn’t match.
How to Serialize a Map to JSON
I am proposing a different way to serialize a map to JSON.
Create a new array from the existing map
Serialize the array to JSON
Create a new map from the deserialized JSON
The code I propose looks like this. You might not want to keep the logging at the end in your own implementations.
// ### ugly code below
const mapToSerialize = new Map([[1, 'first key']])
// convert the map to JSON
const arrayToSerialize = []
mapToSerialize.forEach((value, key) => arrayToSerialize.push([key, value]))
const serializedMap = JSON.stringify(arrayToSerialize)
// convert the JSON back to a map
const deserializedMap = new Map(JSON.parse(serializedMap))
// log to console for inspection
console.log(mapToSerialize)
console.log(deserializedMap)
This code correctly works to serialize and deserialize a map to/from JSON.
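Incidentally, since a Map iterates as [key, value] pairs, the manual forEach loop can be replaced with a spread (or Array.from) if you prefer it shorter:

// same result, more concise: spreading a Map yields its [key, value] pairs
const serializedMap2 = JSON.stringify([...mapToSerialize])
const deserializedMap2 = new Map(JSON.parse(serializedMap2))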
In Conclusion
Serializing a Map to and from JSON isn’t as simple as it looks on the outside. Especially if from the outside it looks like it would be super simple. The Object.entries silver bullet only works for Maps with string keys, and attempting to use it for Maps with integer keys will end up with you hating your life. You probably will start listening to Barry Manilow constantly. You might even start eating Quinoa. I’m sure it happens all of the time due to Map serialization woes.
Anyways, if you prefer to not eat Quinoa, then go ahead and convert your Maps to and from JSON the way I suggested above.
At the end of George Orwell’s 1984, Winston Smith is sitting in the Chestnut Tree Cafe. Upon hearing of the victory in Africa we are told he enters a dreamlike state. He has finally defeated himself. He loves Big Brother.
That’s it, right? That’s the end of the story. Or is it?
Join me for a moment and think through the last chapters of the book, if you will.
Did Winston Smith die?
There are a number of theories about whether Winston died or not. Some say he was literally shot in the back of the head. Some believe that the person who was Winston Smith ceased to exist, the death being metaphorical.
Either way, the end is ambiguous, correct? Either Winston physically died, or in admitting a love of Big Brother, he metaphorically died.
I don’t believe either is true. Winston Smith is alive at the end of 1984.
Why is Winston Smith alive?
Winston is alive both physically and mentally.
First, his physical death is described from within a dreamlike state. He is blissfully dreaming of going to the Ministry of Truth and confessing everything and incriminating everyone. He describes a long-hoped-for bullet entering his brain. However, this isn’t actually a description of his physical death, just a part of his dream.
Later, we’ll get to why Winston is alive mentally.
What is Winston’s Plan?
Above we see that Winston Smith perceives that the only way to keep a secret is if one keeps it from oneself. You must always know that it is there but never let it emerge into your consciousness.
After this thought Winston hatches a plan. He believes that one day the Party would decide to shoot him. It was in that instant that Winston would reveal his secret, his hatred of the Party, his hatred of Big Brother.
You can see Winston’s plan. He was going to die hating them. Because, as he says, “To die hating them, that was freedom.”
From that moment forward Winston was resolute. He shut his eyes and committed to this purpose.
Why is Winston Smith mentally alive?
Now we come to the crux of the issue. We know that Winston is not physically dead. But why is it that we know he is mentally alive?
Think of his plan: “To die hating them, that was freedom.” Now, think of how he was going to accomplish that. He must keep secret his hatred of them and of Big Brother. He must keep this secret from the Party and from himself. And yet, while keeping this secret he must also know that it is there.
How can we know that Winston Smith is mentally alive? It is in the way that he describes the bullet that we know he is mentally alive. Winston is surely successful in convincing himself and the Party of all else. He loves the Party, and in the end he is able to convince himself of the hardest thing, he even loves Big Brother.
Or does he?
He does not. Winston knows of his secret hate of Big Brother, and we know of it too. He has succeeded in keeping this secret from the Party and himself.
However, he betrays his knowledge of this secret in the way he describes the bullet that would spell his doom. He describes the bullet as “long-hoped-for”.
A long-hoped-for bullet.
Why hope for a bullet? Why does Winston Smith describe the bullet this way in the end of the book?
He hopes for a bullet because to him it represents his freedom. The bullet is his plan. He told us this.
This is why he hopes for the bullet. This is why he describes the bullet as long-hoped-for. His rebellion is not quashed, his mind is not gone. No, he has simply succeeded in his plan. He loves Big Brother, and his hatred is known to but secret from him. The long-hoped-for bullet will be his freedom.
However, for the sake of this answer I will lay out Winston’s plan.
First some information:
Winston perceives the only way to keep a secret is if it is a secret to everyone, including yourself. You must have knowledge of this secret, but it should never enter your consciousness.
Winston concludes that the only way to have freedom was to die hating them (them being the Party and Big Brother).
Winston believes that at some point in the future he will be shot and a bullet will enter the back of his head.
Winston’s Plan
He will keep his hatred of the Party and Big Brother a secret, even to himself.
He will be shot with a bullet in the back of the head.
The instant he is shot, he will let loose his secret hatred of the Party and of Big Brother. In this way he will be free, because “To die hating them, that was freedom.”
Winston knows of this secret, however, and we know too. He betrays his knowledge of this secret in the slightest by describing that bullet, which was to enter the back of his brain, as “long-hoped-for”. After all, why would one hope for a bullet?
He hopes for the bullet, because the bullet is his escape, the bullet is the penultimate step in his plan. The bullet represents his freedom. It is then that he will die with dignity, his rebellion complete, finally unveiling the secret he is keeping from them, and from himself. He hates them and he hates Big Brother.
I just ran into a problem and I wanted to document it for myself and for anyone else who might have issues. First I describe the problem, then I give the solution. Scroll down if you’re looking for the solution.
The Problem
After posting with JavaScript fetch I did not see the “body” arguments come through on the server. The method of the fetch was set to POST and the body included was an instance of FormData. According to the documentation on MDN, everything should’ve worked. So why wasn’t it working?
The Basic Client Side Code
const body = new FormData(myForm)
// assume myForm.action = "https://example.com/ajax/post"
const response = await fetch(myForm.action, {
method: "post",
body,
})
The Basic Server Side Code
<?php
// file: index.php within the ajax/post directory
// don't bother processing the post if there is none
if(empty($_POST)){
exit;
}
// ... processing code below
I spent some time debugging and without a doubt, every POST request to the index.php file did not have the $_POST array filled out. The POST array was empty as well as the REQUEST array, even the oft-touted file_get_contents('php://input') came up empty.
The Solution
You aren’t going to like it. I don’t like it. The solution to this problem is so annoying that you’ll just facepalm like Picard.
Add a slash to the end of the url you are posting to.
The problem url is: https://example.com/ajax/post
The working url is: https://example.com/ajax/post/
When the slash-less url is posted to, the server responds with a 301 redirect before the index.php file is ever hit. But why? The problem is that you do not have a trailing slash in your url. That’s it. You are posting to an index.php file within a directory, but your url does not have a trailing slash. So your server helpfully redirects you to a url with a trailing slash, and you lose your posted information along the way.
Yep, that’s it. Add a trailing slash and you’ll see your body come through when debugging.
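If you want to catch this class of bug early, the fetch Response object exposes a redirected flag. A small sketch, reusing the client code from above:

const response = await fetch(myForm.action, { method: "post", body })
if (response.redirected) {
  // the server 301'd the request (e.g. missing trailing slash) and the POST body was lost
  console.warn("Request was redirected to", response.url)
}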
Do you hate seeing a sliver of light between your monitors? I sure do. I’ve spent a lot of time getting my monitors lined up just right so that they are flush against each other with no gaps. However, a slight bump of the desk can throw all that hard work out the window.
So, I figured out another solution. Instead of adjusting the monitors to make sure that they are “perfect”, I took some black electrical tape and ran it down the seam behind the two monitors. The black tape matches the black border of my monitors and it blocks any light from coming in!
I highly suggest running black electrical tape down the back seam between your monitors. It really helps!
As you might know, I’ve set up a RocketChat server recently on Digital Ocean. So far it’s been working great. An update every once in a while is all it needs.
However, yesterday, I attempted an update that failed. From then on every attempted update resulted in “RocketChat server not running. Cancelling”. This was very frustrating.
First, a few commands to try that might help:
systemctl restart rocketchat.service – This will start your RocketChat server in case it is stopped.
systemctl status rocketchat.service – Use this command to check the results of the previous command. Typically this will report that the service is “Active” if the previous command was successful.
In my case, the second command resulted in a “failed” state. The command itself gave me some information as to what the failure was, but not a lot of context as to what caused the failure. However, it did show me the process that it attempted to run. It said, ExecStart=/opt/nvm/versions/node/v14.19.3/bin/node /opt/Rocket.Chat/main.js (code=exited, status=1/FAILURE).
Alright! We’re getting somewhere. With that I was able to figure out what command failed and where that command was run. I navigated directly to the /opt/Rocket.Chat directory which was where the failure was occurring. From here I ran node main.js. The results of this command were much more helpful. They told me this, Error: Cannot find module '@meteorjs/reify/lib/runtime'. That looks like an issue with npm dependencies.
So, I poked around the Rocket.Chat directory structure and looked for dependencies for the Rocket.Chat server. I found what I was looking for in the /opt/Rocket.Chat/programs/server directory.
From this directory I ran two commands:
npm install
npm ci
Afterwards I attempted to start the RocketChat server again using the systemctl restart rocketchat.service command. I checked it with systemctl status rocketchat.service and found that it was working now! RocketChat was back to running normally. The problem with “RocketChat server not running. Cancelling” was gone!
Yesterday I took my son to the SmileKeepers Medford children’s dentist office.
I had to fight with the dental staff just to be allowed in the room where the work would be done. They suggested I wait in the car. I pushed to the point where I actually met with the dentist before the appointment. The dentist said to me that he’d been doing this for 40 years and he didn’t need my help “back there”. However, I was prepared to leave if my request was not met. So, after much pushing on my part they allowed me back in the room with my son. They said, that if I was going to come back they would just have to do less work on his mouth than they had planned. Note: This has never been a problem with any other dentist. I’ve been in the room with my kids numerous times and my kids draw comfort and courage from knowing that I’m in the room.
I sat in the corner of the room, as I normally do at the dentist, and the staff worked with my boy. My boy was great: he let the assistants work on his mouth, easily took the laughing gas, and was following directions well. Then the dentist came in and began to work. It wasn’t long before my son was silently writhing in pain, trying to tough it out. My son then began to tell the dentist that what he was doing hurt. The dentist would stop working on my son’s mouth and ask “does it hurt?”, my son would answer “Yes”, then the dentist would say, “I’m not doing anything right now, so it can’t be hurting” and then would continue working. My son was telling the dentist continually that it was hurting. However, the dentist was not listening to my boy’s concerns when my son said he was hurting. Instead the dentist was arguing with my son about whether he hurt or not. He was demeaning and was not treating my son with respect; he wasn’t treating my boy the way I would expect a person to be treated. He exasperated my son. There was nothing I could do. I wasn’t just going to leave the office with my boy bleeding from his mouth and try to rush him to another dentist. I also wasn’t going to risk telling this man who’d been doing this for 40 years how to do his job. Who knows if he’s the kind of person to take “revenge”. I had to let the man finish what he was doing as quickly as possible.
Now, anyone who knows him knows my son is tough. When my son says something hurts, he is telling the truth and I believe him. That said, I have never seen any one of my six kids like I saw my boy yesterday. I listened as my son yelled “No, no, no!” and the dentist ignored him to continue “working”. I sat there in silent rage as my boy screamed and writhed in pain, and instead of validating my son’s concerns, the dentist argued with him and tried to prove him wrong.
What’s more is that when the appointment was over the dentist turned to me and said “this is why we don’t allow parents back in the room” as if it was my fault.
Now, I’ve taken the day to let my feelings subside and to think about this logically…
I am livid. There is a word that describes what happened to my son yesterday, abuse. I don’t even want to think of the additional “work” they would have done had I elected to wait in the car as they’d suggested. Anyone who knows us knows that we will push our kids through their tears when necessary. This was not one of those times.
Dr. Zirkle from SmileKeepers Children in Medford, Oregon is the dentist’s name. He is not a safe dentist. I strongly recommend keeping your distance from him. We’ve pulled all 5 of our kids who are patients out of that office and we won’t be using another “SmileKeepers” or affiliated Interdent brand of dentist office (Gentle Dental) ever again. The fact that Dr. Zirkle is allowed to continue to represent Interdent is indicative of the bad decisions Interdent makes. A representative like Dr. Zirkle reflects very poorly on the Interdent company as a whole.
Other Parents Experiences
Other reviews are just as horrific. Apparently this has been happening for YEARS.