My current project at Homesnap involves breaking a monolithic application into multiple microservices, and part of that is moving a large amount of data into a new microservice by feeding it through our new API.

While I’m able to get reasonable throughput by firehosing the API with data from my machine, a single machine simply can’t move the volume of data necessary in any reasonable amount of time. So I decided to queue the data using Amazon’s Simple Queue Service (SQS), with a Lambda function triggered as data is written to the queue. In theory, this lets me scale out data ingestion using Amazon’s capacity rather than finding more and more machines to run my import utility on.

What I found, however, was that Lambda, with 80 concurrent executions each sending 10 requests per batch, could barely outperform my single machine. My setup was pretty basic: a stateless, HTTP-based API behind an AWS Application Load Balancer (ALB). I expected Lambda to scale nearly linearly until the database’s resources were exhausted, so I began tinkering until I got the performance I was expecting.

1. Sticky Sessions

While I wouldn’t expect sticky sessions to be necessary (since my application is completely stateless), I wasn’t able to get reasonable performance from my machine or Lambda to my API through ALB without sticky sessions enabled. Enabling them improved performance significantly on my machine, but Lambda still suffered, so I kept digging.

2. Get Node’s http module to use sticky sessions

My Lambda function was just a simple Node.js function using the built-in http module to send requests to the API. However, unlike client-side JavaScript using the fetch API, the http module doesn’t automatically store / send cookies with each request. Thus, while sticky sessions were enabled, the Lambda function was never benefiting from them! Fixing this is fairly trivial, once you know what needs to be done.

When Lambda loads your Node.js function, it executes your script once, then invokes your exported handler function repeatedly until the process shuts down. We'll exploit this fact in the code snippet below.
const http = require('http');

// Initialize our cookie to an empty array. This lives at module scope,
// so it survives across invocations while the Lambda container is warm.
let cookie = [];

function sendRequest(data) {
    return new Promise((resolve, reject) => {
        const request = http.request({
            host: 'example.com',
            path: '/api',
            method: 'POST',
            headers: {
                // Pass our stored cookies along with the request
                "Cookie": cookie
            }
        }, (response) => {
            // When we receive a response, store the cookies returned from the
            // server in our module-level cookie variable. A cookie is a
            // semicolon-delimited string of attributes, but the only part we
            // want to send back to the server is the first (name=value) part.
            // Assign to the module-level variable (no `let` here); declaring a
            // new local would shadow it and the cookies would never be saved.
            cookie = (response.headers["set-cookie"] || []).map(v => v.split(';')[0]);

            // Drain the response and resolve once it completes, so callers
            // can actually await the request.
            response.resume();
            response.on('end', resolve);
        });

        request.on('error', reject);
        request.end(data);
    });
}

exports.handler = async (event) => {
    await sendRequest('{ "hello": "world" }');
};

The result? My function went from ~2,000 invocations per minute to ~9,500 invocations per minute, and the maximum duration of my function dropped from ~22 - 28 seconds to ~6 seconds. Additionally, I was able to reduce the number of Docker images running in the cluster from 20 to 6 while sustaining the same throughput. All in all, everything is running faster at lower cost, which makes me happy.

Hopefully this can help you if you are in a similar situation.

Rob Conery has released a new book called The Imposter’s Handbook for those in the software industry who don’t have a strong background in computer science fundamentals. I haven’t read the book so I can’t comment on what it covers, but the concept has me reflecting on my own similar experience.

While I did attend some university, I only finished about half of my computer science degree before I dropped out. At the time, I didn’t think I had missed out on anything because I had already taught myself everything I had encountered in school up until that point; I was always a semester ahead of my course work.

I realize now that I had been pretty close to the promised land of computer science, because I had just hit Big-O notation, though at the time I thought it was completely pointless. In a narrow sense I was somewhat correct, but looking back now I realize how wrong I was.

This idea still resonates with many other self-taught developers, though. The easiest place to see it is in the criticism of technical interviews and their focus on rote data structure / algorithm questions. Often the counter-argument is “I don’t need to know this because I can just google it”, which is a fairly valid response.

So why, exactly, should you care about computer science if you are already a good developer? Well, to an extent, you don’t. You can be a successful developer, at least for a while.

I was successful for years before I started diving deep into computer science on my own. I wasn’t some prodigy; I was just riding the effective application of computer science through my use of a DBMS. In fact, a large number of developers can be, and are, successful because those fundamentals are baked into the frameworks they use.

At some point, though, you will start hitting scaling problems as you continue to be successful. Oftentimes, the scale it takes to break things is less than you would like. Sometimes it will happen because you finally land that big client.

The classic example: you have a component with some sort of double loop that works great for all of your existing clients, who each have a hundred items of something. Then you get the client who has a thousand, perhaps even ten thousand, items, and everything suddenly starts falling over.
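That double-loop failure mode is easy to sketch. Here’s a hypothetical JavaScript example (my own invention, not from any particular codebase) of the same task, a duplicate check, written both ways:

```javascript
// O(n^2): compare every pair of items. That's roughly n^2 / 2 comparisons:
// about 5,000 for a hundred items, but about 50 million for ten thousand.
function hasDuplicatesQuadratic(items) {
    for (let i = 0; i < items.length; i++) {
        for (let j = i + 1; j < items.length; j++) {
            if (items[i] === items[j]) {
                return true;
            }
        }
    }
    return false;
}

// O(n): a Set gives (amortized) constant-time lookups, so a single pass
// suffices and ten thousand items cost only ten thousand operations.
function hasDuplicatesLinear(items) {
    const seen = new Set();
    for (const item of items) {
        if (seen.has(item)) {
            return true;
        }
        seen.add(item);
    }
    return false;
}
```

Both behave identically for the hundred-item clients; only the second survives the big one.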

This is why many companies focus on ensuring that developers have a good grasp of data structures, algorithms, and their space / time complexity analysis during the interview process; they have hit the point where they can’t scale further without doing things right. Furthermore, they can’t afford to have people writing code that, when released, will immediately fall over under load.

So, if you’ve made it this far, I now make an appeal to you, reader. If you don’t have a strong background in computer science, start today. You don’t need it, but it can only make things better. There has never been a better time, because there has never been better access to the information. Whether you read a book, take an online course, or read blogs, there’s a multitude of information out there at a reasonable price.

Recently I was working on reducing some redundancy in an ASP.NET MVC application and ran into an error when trying to conditionally render a section. ASP.NET was not happy and gave me the following error:

The following sections have been defined but have not yet been rendered for the layout page “~/Views/Shared/_Layout.cshtml”: “ProductionOnlyScripts”.

On this layout page we have the following code.

@if (ApplicationConfiguration.IsProduction)
{
    @RenderSection("ProductionOnlyScripts", false)
}

I never expected that if I defined a section, MVC would require me to render it. I can kind of understand why, but I disagree with this design decision. Let’s run down the options we have with sections in general.

  1. Calling @RenderSection(name) when the section has not been defined will throw an error. This is the right thing to do.
  2. Calling @RenderSection(name, required: false) when the section has not been defined will not throw an error. This is the right thing to do.
  3. Not calling @RenderSection(name) or @RenderSection(name, required: false) will throw an error if that section has been defined (as we observed above).

So, the workaround is simple: we render the section to nothing.

@if (ApplicationConfiguration.IsProduction)
{
    @RenderSection("ProductionOnlyScripts", false)
}
else
{
    RenderSection("ProductionOnlyScripts", false)?.WriteTo(TextWriter.Null);
}

In the end, ASP.NET is tracking the section fragment and ensuring that we write out its contents. What it doesn’t know is that we are effectively writing them to /dev/null, so the content never makes its way to the browser.

This can also be extracted into an extension method if you find yourself doing this often.
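One possible shape for that helper (a sketch; `IgnoreSection` is a name I made up, and this assumes the standard `System.Web.WebPages` APIs `WebPageBase.RenderSection` and `HelperResult.WriteTo`):

```csharp
using System.IO;
using System.Web.WebPages;

public static class SectionExtensions
{
    // Marks a section as rendered without emitting its contents,
    // by writing them to TextWriter.Null.
    public static void IgnoreSection(this WebPageBase page, string sectionName)
    {
        page.RenderSection(sectionName, required: false)?.WriteTo(TextWriter.Null);
    }
}
```

The else branch in the layout then collapses to @{ this.IgnoreSection("ProductionOnlyScripts"); }.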

The only real mistake is the one from which we learn nothing.

John Powell

Let’s make one thing clear: being a candidate in an interview is hard. Over your career you will likely participate in many interviews as a candidate, and I can assure you that you likely won’t pass every single one. You aren’t alone here; I myself am no stranger to failed interviews. You can, however, make sure that you are getting something out of every single interview: a new job, exposure to a new problem, a new way to solve a problem, or a better understanding of a problem.

As I have been interviewing candidates recently for a software engineer position at DevResults, I’ve been thinking about what I would do if I had been the candidate in these interviews, and I’ve compiled my current thoughts into a few general tips. Some of these are specific to technical interviews, but the majority apply to interviews in general.

Failing a problem is an opportunity to learn

In one interview I participated in, I was asked to return all permutations of a string. At the time, I had never had to do anything like this, and needless to say I didn’t do well. After the interview I took the time to really understand the problem and strategies for generating permutations in general, mainly to ensure I wouldn’t ever fail that question again.

Fast-forward to last year: I found myself in a scenario where I did in fact need to generate all possible permutations of a set of data. It made me really appreciate having been exposed to that problem in the interview, and it was fairly trivial to apply what I had learned to solve it.

Most of the time, however, you’ll probably see something you’ve encountered before, or some variation of a known problem. In scenarios like these, there’s often multiple ways to approach a problem and if you didn’t quite get it, the interviewer may explain the solution to you or you can ask what the solution is. If you are able to get the solution to the problem, definitely take the time later to make sure you understand the solution and why it’s the right solution.

If you actually got the solution, but in a non-optimal way (perhaps your solution had poor runtime performance or poor memory usage), be sure to note the problems your algorithm had and research solutions that address those issues.

Ask for feedback

Don’t hesitate to ask for feedback or suggestions on how you did in an interview, but keep your questions focused on your performance. A lot of the time you won’t get a response, for various reasons such as company policy preventing the interviewer from giving any feedback. Any feedback you are able to get, however, can be very valuable.

For every interview I conduct, I block off an hour and a half, broken into three parts:

  • The first 10 - 15 minutes I go over the job description and get to know a little more about the candidate
  • The next 45 minutes is dedicated to the candidate answering technical questions
  • The remaining 30 minutes is open question time for the candidate

The vast majority of the time, candidates spend about 10 - 15 minutes asking the usual questions: what kind of source control we use, how we do deployments, etc. The other day, however, one candidate took full advantage of this time, which I always explicitly say is open Q&A. The candidate asked for feedback on their resume, how well they were communicating while they were coding, how well I could understand them (they were a non-native English speaker), and many other questions.

The insight was extremely valuable for them personally and likely not something any company policy would bar. Before and after each interview, think critically about what things you struggle with and try to get feedback from the interviewer on how they think you did in those regards. This will help you do better in all subsequent interviews.

The only way to get better at interviewing is to do interviews

This seems like common-sense advice, but it really is true. If you are having a hard time with interviews, you just need to do more of them. Generally, people only apply to companies or jobs they are really interested in, holding off on positions they would likely be happy with but that aren’t their first choice. This is fine, but it can be really demotivating when you don’t make the cut for the positions you care about most.

When I’m job searching, I tend to take a balanced approach and apply to both types of positions. This lets me practice interviewing and reduces the risk of failing an interview somewhere I really want to work just because my nerves got to me. It would be unethical to waste someone’s time by applying to positions you would never want, though, so only apply to jobs you can see yourself being happy in.

The last time I went through the hiring process, it actually became a little complicated at the end: I had to choose between a job I thought I would really like and a position I had originally thought I could merely be happy with, but knew I really wanted once I met the people involved and learned more about the mission / vision.

Out of the box, posh-git uses a red color that's a bit hard to read against most background colors. All hope is not lost, because it's very easy to fix.

The Windows console has some built-in named colors, and posh-git is configured to use DarkRed. While we can’t change the named color that posh-git uses, we can change the color the console displays for it. To do this, simply right-click on the PowerShell icon, then select Properties from the menu.

It should be pretty clear which color in the row of colors is dark red, but if it isn’t then it is the fifth color from the left.

Once you select the dark red color, you can change the selected color values to a more humane color. As can be seen in the screenshot, I’ve chosen a salmon color (R: 255, G: 154, B: 154), which goes well with my black background. Now, you’re likely still on Screen Background and the console background has changed to salmon, so just remember to re-select the previous background color before you hit OK.
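If you’d rather script the change than click through the dialog, the default console palette also lives in the registry (a sketch; the value is a COLORREF DWORD in 0x00BBGGRR order, and ColorTable04 is the slot for DarkRed, that fifth color):

```powershell
# Point the console's DarkRed slot (ColorTable04) at salmon (R: 255, G: 154, B: 154).
# Console palette entries are COLORREF values stored as 0x00BBGGRR.
Set-ItemProperty -Path HKCU:\Console -Name ColorTable04 -Value 0x009A9AFF -Type DWord
# Newly opened console windows pick up the change; windows with saved
# per-shortcut settings keep their own palette.
```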

That’s it. Now hopefully you’ll have a more humane posh-git experience.