Streaming upload to S3

Here’s a short recipe for transmitting files from an external source to an S3 bucket without downloading the whole source first and thus allocating memory unnecessarily. It takes advantage of requests’ streaming capability.
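A minimal sketch of such a recipe, using requests together with boto3 (the URL, bucket and key below are placeholders):

import boto3
import requests

def stream_to_s3(url, bucket, key):
    # Open the HTTP response without reading the body into memory.
    with requests.get(url, stream=True) as response:
        response.raise_for_status()
        # upload_fileobj consumes the file-like response body in chunks
        # (multipart upload), so memory usage stays flat even for huge files.
        boto3.client("s3").upload_fileobj(response.raw, bucket, key)

stream_to_s3("https://example.com/big-file.bin", "my-bucket", "big-file.bin")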

Even with files over 2 GB in size, the Lambda container consumed only about 120 MB of memory. Pretty sweet. Of course, this approach is applicable to any platform, not just Lambda.

Redshift workload management basics

Configuring workload management (WLM) for a Redshift cluster is one of the most impactful things you can do to improve the overall performance of your queries.

The goal is, roughly speaking, to have as few slots per queue as possible, with as little wait time (ideally none) in each queue as possible. This ensures that queries have the largest amount of memory available (which helps with query execution speed, as intermediate results don’t have to be written to disk) while, at the same time, executing immediately.

There’s no golden rule for configuring WLM queues, as it is really use-case specific. I recommend starting very simple. By default, there’s a single queue with a concurrency level of 5. This is most probably insufficient: queries won’t be executed immediately, but will be waiting for a slot to free up. Increase it (say, to 15) and monitor the wait time over the next few days.
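If you manage the cluster’s parameter group as code, bumping the concurrency of the default queue can be sketched with boto3 like this (the parameter group name is a placeholder; as noted below, WLM queue properties are applied dynamically, without a reboot):

import json
import boto3

# A single default queue with a concurrency level of 15; additional queues
# would be further dicts in this list.
wlm_config = [{"query_concurrency": 15}]

boto3.client("redshift").modify_cluster_parameter_group(
    ParameterGroupName="my-cluster-params",  # placeholder
    Parameters=[{
        "ParameterName": "wlm_json_configuration",
        "ParameterValue": json.dumps(wlm_config),
    }],
)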

You can use the v_check_wlm_query_trend_hourly admin view from the tremendously useful amazon-redshift-utils and plot it on a graph.
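A quick sketch of pulling that view into pandas and plotting it (the connection details are placeholders; I’m assuming the admin views are installed under an admin schema, and the column names are assumptions, so check the view’s definition for the actual ones):

import pandas as pd
import psycopg2

# Placeholder connection details.
conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",
                        port=5439, dbname="analytics", user="admin", password="***")

df = pd.read_sql("SELECT * FROM admin.v_check_wlm_query_trend_hourly", conn)
# Assumed column names -- substitute the ones the view actually exposes.
df.plot(x="exec_hour", y=["total_queue_time_s", "total_exec_time_s"])  # requires matplotlib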

[Graph: WLM execution and wait times for the main queue]

You are only interested in queues with a service_class > 5, as the first five are internal and you cannot change their configuration.

In the graph above you can see that there’s pretty much no wait time on the queue, which is a good thing. In such a case you can experiment with reducing the concurrency level to increase the memory-per-slot of a queue. Use this query to inspect the memory allocation and concurrency level of your queues:

SELECT service_class, query_working_mem as mem_mb_per_slot, num_query_tasks as concurrency_level
FROM stv_wlm_service_class_config
WHERE service_class > 5
ORDER BY 1;

Finally, make sure to set a query timeout (the maximum time a query can run) on your WLM queues. A runaway query can bring your cluster to a halt.

Figuring out the sweet spot for your WLM setup takes a while and you should revisit it regularly as your system evolves. The great thing about changing WLM config is that tweaking the properties of a queue does not require a cluster reboot so you won’t disrupt the work of your colleagues by experimenting with the setup.

There are a lot of fine-grained parameters you can adjust and tons more to learn about WLM (my favourite gem is wlm_query_slot_count). Yet even a very basic setup will help with the overall cluster performance. It is absolutely worth the effort to understand and implement WLM.
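To illustrate that gem: wlm_query_slot_count is a session-level setting that lets a single memory-hungry statement temporarily claim several slots of its queue, and with them the combined memory. A minimal sketch (connection details and table name are placeholders):

import psycopg2

conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",
                        port=5439, dbname="analytics", user="etl", password="***")
conn.autocommit = True  # VACUUM cannot run inside a transaction block

with conn.cursor() as cur:
    cur.execute("SET wlm_query_slot_count TO 3;")  # claim 3 slots for this session
    cur.execute("VACUUM big_table;")               # the heavy statement gets 3x the memory
    cur.execute("SET wlm_query_slot_count TO 1;")  # back to the default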

Overseeing a long running process with Lambda and Step Functions

In my day job, we’re using Lambda and Step Functions to create data processing pipelines. This combo works great for a lot of our use cases. However, for some long-running tasks (e.g. web scrapers), we “outsource” the computing from Lambda to Fargate.

This poses an issue: how to plug that part of the pipeline into the Step Function orchestrating it? Using an Activity does not work when the processing is distributed among multiple workers.

A solution I came up with is creating a gatekeeper loop in the Step Function that oversees the progress of the workers with a Lambda function. This is how it looks:

[Diagram: the gatekeeper loop in the Step Function]

The gatekeeper function (triggered by the GatekeeperState) checks whether the external workers have finished yet. This can be done by waiting until an SQS queue is empty, counting the number of objects in an S3 bucket, or any other signal indicating that the processing can move on to the next state.

If the processing is not done yet, the gatekeeper function raises a NotReadyError. This is caught by the Retry block in the Step Function, pausing the execution for a certain period of time, as defined by its parameters. Afterwards, the gatekeeper is called again.

Eventually, if the work is not done even after MaxAttempts retries, the ForceGatekeeperState is triggered. It adds a "force: true" parameter to the invocation event and calls the gatekeeper right back again. Notice that the gatekeeper function checks for this force parameter as the very first thing when executed. Since it’s present from the ForceGatekeeperState, it returns immediately and the Step Function moves on to the DoneState.
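A minimal sketch of such a gatekeeper handler, using an empty SQS queue as the readiness signal (the queue URL is a placeholder; counting S3 objects would work the same way):

import boto3

QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/work-queue"  # placeholder

class NotReadyError(Exception):
    """Raised while the external workers are still busy."""

def handler(event, context):
    # The force flag injected by the ForceGatekeeperState wins over any check.
    if event.get("force"):
        return event

    attributes = boto3.client("sqs").get_queue_attributes(
        QueueUrl=QUEUE_URL,
        AttributeNames=["ApproximateNumberOfMessages",
                        "ApproximateNumberOfMessagesNotVisible"],
    )["Attributes"]
    pending = sum(int(count) for count in attributes.values())

    if pending > 0:
        raise NotReadyError("%d messages still in flight" % pending)
    return event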

For our use case, it was better to have partial results than no results at all. That’s why the ForceGatekeeperState is present. You can also leave it out altogether and have the Step Function execution fail after MaxAttempts retries of the gatekeeper.
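For reference, the relevant states might be sketched in Amazon States Language like this (expressed as a Python dict for brevity; the ARN, interval and attempt count are made up):

import json

GATEKEEPER_ARN = "arn:aws:lambda:eu-west-1:123456789012:function:gatekeeper"  # placeholder

definition = {
    "StartAt": "GatekeeperState",
    "States": {
        "GatekeeperState": {
            "Type": "Task",
            "Resource": GATEKEEPER_ARN,
            # Pause between polls; give up polling after MaxAttempts.
            "Retry": [{"ErrorEquals": ["NotReadyError"],
                       "IntervalSeconds": 300, "MaxAttempts": 12, "BackoffRate": 1.0}],
            # Once retries are exhausted, force the pipeline onwards.
            "Catch": [{"ErrorEquals": ["NotReadyError"],
                       "ResultPath": "$.error", "Next": "ForceGatekeeperState"}],
            "Next": "DoneState",
        },
        "ForceGatekeeperState": {
            "Type": "Pass",
            "Result": True,           # injects "force": true into the input
            "ResultPath": "$.force",
            "Next": "ForcedGatekeeperState",
        },
        "ForcedGatekeeperState": {
            "Type": "Task",
            "Resource": GATEKEEPER_ARN,  # same function, now with the force flag
            "Next": "DoneState",
        },
        "DoneState": {"Type": "Succeed"},
    },
}

print(json.dumps(definition, indent=2))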

A better way to package Python functions for AWS Lambda

The default way of creating a zip package that’s to be deployed to AWS Lambda is to place everything (your source code and any libraries you are using) in the service root directory and compress it. I don’t like this approach: the flat hierarchy can lead to naming conflicts, it is harder to manage packaging of isolated functions, and it creates a mess in the source directory.

What I do instead is install all dependencies into a lib directory (which is as simple as a pip install -r requirements.txt -t lib step in the deployment pipeline) and set the PYTHONPATH environment variable to /var/runtime:/var/task/lib when deploying the Lambda functions.

This works because the zip package is extracted into /var/task in the Lambda container. While it might seem like a fragile solution, I’ve been using it for over a year now without any problems.
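The resulting layout and a handler relying on it might look like this (the module names are just for illustration):

# Package layout after the build step:
#
#   my-function.zip
#   |-- handler.py        # your own code, imported as usual
#   `-- lib/              # created by: pip install -r requirements.txt -t lib
#       `-- requests/ ...
#
# handler.py
import requests  # resolved from /var/task/lib thanks to PYTHONPATH

def handler(event, context):
    # trivial proof that the dependency was loaded from the lib directory
    return {"requests_loaded_from": requests.__file__}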

AWS Lambda deployment pipeline

TL;DR: I’m open-sourcing a continuous deployment pipeline built for AWS to automate the process of creating and deploying AWS Lambda functions and related infrastructure.

Because of my tinkering with Alexa, I wanted to have an automated way of deploying a new version of a Lambda function just via git push. Doing it manually is cumbersome. As of late, AWS offers all the tools necessary to do so: their Code* family of services (CodeCommit, CodeBuild & CodePipeline) provides the perfect building blocks to set up this process.

Furthermore, I also wanted to automate the necessary infrastructure and treat it as code. That’s where CloudFormation comes in. I didn’t have any prior knowledge of CloudFormation, so it was a great learning experience. I used this excellent template as a starting point and I want to thank the guys over at Cloudonaut for publishing it. Still, it took me a lot of time to grasp all the concepts of CFN and I went through a lot of trial and error to figure out how everything ties together.

In the end, I’m very happy with the result. This initial version is quite basic, but it works well. What makes it cool is that the pipeline is self-referencing, so any changes you make to it get automatically applied. You can read the details about how it works in the README.

I will be expanding its functionality; feel free to star the repo on GitHub and follow along.

Plain text input of passwords on mobile

Writing on a mobile device, whether it’s a smartphone or a tablet, sucks. Because of the keyboard size and the lack of physical feel, it’s just so easy to get it wrong. The situation gets worse when inputting passwords: because the input is concealed, one can’t check for typing errors.

I’ve long been a proponent of just showing the password field in plain text on mobile devices. There are multiple ways to go about it: a toggle switch, a button that reveals the input for a limited time, or possibly showing plain text automatically after the first unsuccessful login attempt.

The obvious concern is security, but I don’t think this approach is insecure. It is much easier to conceal the display of a mobile device from prying eyes. Furthermore, this approach leads to a higher success rate, so there’s no need to type a password multiple times, which would present more opportunities to steal it (I’ve seen people who actually whisper their passwords while typing them).

So even though I think it’s good UX, I unfortunately haven’t been able to convince anyone I’ve been building apps for to do this, nor have I seen it out in the wild. That is, until now.

I recently signed up for Mega. Their iOS client has this exact feature on the login screen:

[Screenshots: the password switch on Mega’s iOS login screen]

I have to say the execution is not perfect (at first, I was confused about the actual meaning of the switch, and since I hadn’t typed anything into the password field yet, toggling it didn’t appear to do anything), but it could easily be enhanced. I would like to see more apps adopt this pattern for password input, making it more user-friendly and less error-prone while staying secure.

MCEModelEditingProxy

I’ve recently open-sourced a small but handy Objective-C library called MCEModelEditingProxy.

Oftentimes, when presenting values from a model, you want to make them editable but store the changes back only after a confirmation from the user (e.g. pressing a Save button). MCEModelEditingProxy does precisely that: it stands as a transparent layer between your model and your controller, intercepting writes to the model.

If this sounds too abstract, check out the README in the project repo where you can find example use cases. I hope you’ll find the library useful and include it in your projects too.