My PySpark Essential Training course just launched on LinkedIn Learning!

I’m excited to share that my new course on PySpark Essential Training is now live on LinkedIn Learning!

I was approached by the LinkedIn Learning team after they found some of my Python courses on YouTube, and we brainstormed several ideas for topics together. While I hadn’t used PySpark in a production setting, I was really excited to dive into it and develop a new course for them! This is the first time I’ve created an on-demand course that actually pays royalties based on streams, which is a lot more scalable than the quarterly live trainings I’ve been hosting for O’Reilly.

In the LinkedIn Learning course, I provide a structured and hands-on introduction to PySpark – perfect for data engineers, analysts, and anyone looking to scale their data processing skills. The course covers:

  • The core concepts of Spark and PySpark.
  • Installing PySpark and loading, manipulating, and analyzing large datasets in a notebook environment (a tiny example of this is sketched below the list).
  • How PySpark fits into a wider data engineering ecosystem.
  • Best practices for running PySpark in a production environment.
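
If you’re curious what the hands-on parts look like, here’s a minimal sketch in the spirit of the course exercises (the file name and column names are placeholders, not the actual course materials):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Start a local Spark session - all you need in a notebook environment
    spark = SparkSession.builder.appName("pyspark-essentials").getOrCreate()

    # Load a CSV into a DataFrame ("orders.csv" is just a placeholder)
    orders = spark.read.csv("orders.csv", header=True, inferSchema=True)

    # Manipulate and analyze: filter, aggregate, and sort
    (orders
        .filter(F.col("status") == "completed")
        .groupBy("customer_id")
        .agg(F.sum("amount").alias("total_spent"))
        .orderBy(F.desc("total_spent"))
        .show(10))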

You can stream the course here with a LinkedIn Learning subscription – and feel free to like and bookmark the course to provide feedback!

The key to building a high-performing data team is structured onboarding

As I reflect on the past two years on my last day at Collectors, as well as the years before that at other organizations, I see a common thread between what worked and what didn’t when it comes to building high-performing data teams. But… let me rewind a little.

When I started at Flatiron Health in early 2014, the entire company was around 25 people, around half of them software and data engineers. When I left the company five years later in 2019, we had grown to around 800 employees (give or take, I lost track at some point) – and despite the hyper-growth, we still had one of the highest-performing engineering teams I have had the chance to work with. In my next role, it took me a moment to realize what had made it possible for engineers at Flatiron to hit the ground running, seamlessly integrating into an existing engineering team or forming new teams without major bumps, and what we were now missing: structured onboarding. For me, structured onboarding consists of two things that are both equally important:

  1. The onboarding document
  2. The “bootcamp” phase

Using these two methods, I have onboarded members on my data teams over the past decade and gotten them to be productive and ready to go after just a week. There are several main benefits of structured onboarding, both for the new team member and the organization:

  1. It significantly reduces the time it takes for the new staff member to become productive.
  2. It reduces the amount of “asking around” the new hire has to do.
  3. It makes them feel welcome – arriving to a prepped and personalized welcome doc shows respect and care for the new hire, and sets the tone for excellence.
  4. Beyond speeding up the path to productivity, it also sets the tone for “how we do things here”, which allows every member of the team to operate quickly and efficiently without wondering (or debating) about workflows or best practices.

The onboarding doc

The onboarding doc can be a simple Google doc that’s shared with the new hire on day 1. My onboarding docs look something like this:

  • A brief personalized intro paragraph and “what is the purpose of this document”. I’m serious, this sounds like a minor detail, but for someone who is brand new in a job and might have absolutely no idea what’s going on, even the smallest amount of help can make them feel more comfortable.
  • A list of key documents and links, such as:
    • Company-wide documents and links:
      • The company intranet page
      • Any “what does the company do” type document
      • Org charts
      • Learning & development, travel, holiday, reimbursement etc. policies
    • Team-specific documents and links:
      • The team mission statement and roadmap
      • Key data platform documentation
      • The JIRA board (or whatever project planning tool you’re using)
  • Who to meet:
    • Everyone on the team – I suggest short 15-minute meet & greets with each person (if team size allows)
    • I usually pick a list of 5-10 people in different key roles at the company (e.g. product, software engineering, operations, IT, security…) to chat with. If you choose to do this, please prepare as follows so the experience is pleasant for everyone:
      • I confirm with those people up front that they’re happy to talk to new hires
      • I make a note for the new hire to message the person first, introduce themselves briefly, and ask if it’s okay to throw a quick meeting on the calendar. NO ONE likes “cold” meeting invites.
      • The doc also explicitly calls out that the new hire should come prepared with a quick intro and some basic questions (what’s your background, what do you do in your role, how do you use data in your role, etc.).
  • System access: This is one of the most crucial parts of the onboarding doc and will save everyone a lot of headaches.
    • This should just link to a separate, centrally maintained list that is kept up-to-date.
    • A list of every system and tool and how they can get access, e.g. email, GitHub, VPN, JIRA, databases, SAAS products, etc.
    • Obviously, a lot of these should have been granted to the hire before the start date.
    • I explicitly ask every new hire to add comments or suggestions to the central systems access doc to improve it or update outdated or incorrect information.
  • Onboarding plan:
    • Day 1: What the hire should be doing on day 1. That’s usually just getting access to various systems, joining Slack channels, and reading the “key documents”.
    • Week 1: This usually includes browsing the code base, exploring dashboards, running dev pipelines, and picking an “onboarding ticket” from a list of quick and easy tasks.
      • Great onboarding tasks allow the hire to make and test some simple code changes without needing to know too much about the code base. This could be a small refactor (renaming a variable), writing tests, or adding an existing data point to an output.

The “bootcamp” phase

The “bootcamp” usually covers the first two weeks, during which a new hire learns about the company, the product, the data, the codebase, and best practices for how the team “does things”. These are usually taught in live sessions by the subject matter experts, which allows the new hire to ask questions and get clarification. Obviously, the scope of a bootcamp depends on the size of the company and team, but it can include sessions such as:

  • Company mission statement
  • Who is who: org chart walkthrough
  • Product demos (get a product manager to do that!)
  • Data platform and dashboard demos
  • “Where does the data come from” walkthroughs
  • Developer workflows, code conventions, and best practices
  • How we give and receive feedback on our team

The bootcamp ensures that the new hire gets exposed to key information that’s required for them to navigate the organization. Additionally, it also makes them “drink the kool aid” and shows them how the team operates. This is one of the most crucial points for building a high-performing team: Instead of each new engineer learning from whoever is reviewing their code or helping them get something to work, which may often turn into a game of “telephone”, there is an actual document, slide deck, or some other written documentation of workflows and best practices that everyone knows about from the very beginning.

Bonus for managers: The 30/60/90 document

In addition to an onboarding doc, you may also want to put together a 30/60/90 document for a new hire that lists out expectations of what they should have accomplished at each milestone. There are plenty of templates out there, so I’m just briefly sharing my personal preferences here.

  1. Keep it super simple. One page is enough.
  2. At each milestone, write out what the engineer is expected to be doing with increasing complexity, ownership, and scope depending on their level of seniority: e.g. write SQL queries without help, review other people’s code, lead stakeholder meetings, plan out projects, etc.
  3. I generally split these between technical tasks, team internal tasks, and leadership skills.

Conclusion

Of course, great onboarding isn’t the only thing necessary to build a high-performing team, but it’s almost impossible to build one without it. Depending on the size of your team and organization, onboarding may look different from what I’ve described here, but I hope this provides a good starting point for you. Whether you’re kicking off a brand new team as the first data hire or joining a more established data team that doesn’t have an onboarding plan yet, spending just a few hours putting together a solid onboarding plan and refining it with every new hire will help set your team up for success.

I asked ChatGPT to write me some code to create a running playlist in Spotify… and it just worked?

I don’t really love running. I run marathons. These two things are true at the same time. I started running a lot (like, a lot) during the lockdown 2020 and somehow never stopped. While I do enjoy the actual act of running, the one thing that keeps me coming back to it is the music. My playlists. To me, running is just a different form of dancing – moving your body to music.

As I got more into training for actual races, I noticed that running to music at a certain bpm helped me keep up my cadence and made running at a goal pace easier. Running cadence is basically the same concept as bpm in music: it describes how many steps you take per minute. A higher cadence is generally considered more favorable as it reduces the time spent on the ground, and quicker turnover can help you run faster. I generally aim for 175 bpm for an easy/medium run, which gets me to a 9:30 pace with light effort, and a faster bpm for faster runs.

You probably know where this is going by now: I’m always looking for a good playlist with my target tempo. There are obviously a bunch on Spotify already, but they’re not necessarily the songs I like. While Spotify recently introduced auto-generated playlists at a certain bpm based on your liked artists, it seems to have a pretty wide margin for the tempo. Since I’ve been wanting to play around with auto-generated code by ChatGPT for a while, I figured I might as well tell the AI to write some code for me. And it went… ok? 

Here’s my first prompt, just to test the waters:

>> Can you write me some python code that gets a list of all my liked songs including their ID and bpm from Spotify?

Sure, I can help you with that. To get a list of all your liked songs including their ID and bpm from Spotify, you will need to use the Spotify API and the spotipy library for Python. Here is an example code that should work:

You will need to set up a Spotify developer account and create a new app to get the required credentials for the Spotify API. Then, replace the client_id, client_secret, and redirect_uri values in the SpotifyOAuth call with your own credentials. Finally, run the code and it should print a list of your liked songs with their IDs and BPMs.
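
I’m not reproducing the generated code verbatim here, but the gist of the first step looks roughly like this with the spotipy library (a hedged sketch of a working version, which already assumes the environment-variable authentication I describe further down; the bpm lookup is the part that later turned out to need a separate call):

    import spotipy
    from spotipy.oauth2 import SpotifyOAuth

    # spotipy reads SPOTIPY_CLIENT_ID, SPOTIPY_CLIENT_SECRET, and
    # SPOTIPY_REDIRECT_URI from environment variables
    sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="user-library-read"))

    # Page through all liked ("saved") tracks
    tracks = []
    results = sp.current_user_saved_tracks(limit=50)
    while results:
        tracks.extend(item["track"] for item in results["items"])
        results = sp.next(results) if results["next"] else None

    for track in tracks:
        print(track["id"], track["name"])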

That already looked pretty good, so I got a bit bolder and asked ChatGPT to actually give me the full code to add songs based on a certain bpm… and it delivered:

This looked like reasonable code to me. I was particularly surprised and somewhat bewildered that it automatically added the 5 bpm margin. Of course, most human developers would know that a buffer makes sense and ChatGPT just learned from existing code, but it almost felt like it was being smarter than what I asked it to do?

However, when I tried to run the code, I encountered two minor problems:

First, the example didn’t actually add the variables for the authentication credentials in the SpotifyOAuth call that it mentioned in the comment after the code. I had to do some googling to figure out how to set these and decided to use the respective environment variables instead.

And second, the data model for the object returned by the Spotify API has changed, and the “tempo” now requires a separate API call. I found an example API call and asked ChatGPT explicitly to use the different method to get the “audio features” object:

>> can you write some code that uses the spotipy library to retrieve the audio features for a given track ID

I’ll skip the code snippet it gave me, but I ran the code separately and it ran just fine. Then I put everything together and it… just worked. Playlist created. Job done.
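
Pieced together, the bpm filtering and playlist creation part of the final version looked roughly like this (again a simplified sketch rather than the exact notebook code; it assumes the sp client and the tracks list from the earlier snippet plus the "playlist-modify-private" scope, and the playlist name and target bpm are just example values):

    TARGET_BPM = 175
    MARGIN = 5  # the buffer ChatGPT added on its own

    track_ids = [track["id"] for track in tracks]

    # Tempo lives in the separate "audio features" object, up to 100 IDs per call
    matching = []
    for i in range(0, len(track_ids), 100):
        for features in sp.audio_features(track_ids[i:i + 100]):
            if features and abs(features["tempo"] - TARGET_BPM) <= MARGIN:
                matching.append(features["id"])

    # Create a private playlist and add the matching tracks, 100 per call
    user_id = sp.current_user()["id"]
    playlist = sp.user_playlist_create(user_id, "Running @ 175 bpm", public=False)
    for i in range(0, len(matching), 100):
        sp.playlist_add_items(playlist["id"], matching[i:i + 100])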

The final result

Overall, this took me maybe 15 minutes to put together, including figuring out authentication (I had already registered for a developer account before). Even though the code is fairly trivial, the biggest help was the API examples and the ability to ask questions about the API. It felt like working with a buddy who just happens to know that particular library and API pretty well and can point you the right way. I actually think I might use this in the future when working with unfamiliar APIs. I’m decent enough at reading library and API docs, but using ChatGPT just really sped things up, even though it initially pointed me to an outdated version of the data model.

I don’t actually know how well this would work for inexperienced developers who don’t have a good mental model of how the code should be structured and might struggle with debugging error messages if the code doesn’t actually run. But I feel like for someone who’s written enough code to have an outline in their head and just needs some examples for things like API calls, this might be a great way to accelerate writing code.

The notebook with the final version of the code can be found on my GitHub. And yes, I did run the London marathon with (part of) that playlist!

Analytics From Day 1: How to Ensure Smooth Data Integrations with New Production Systems


This is a pre-print of a post that will be published on the collectors.com tech blog.

We recently went through one of our most epic product launches at Collectors: Customers are now able to submit cards for grading to PSA, add them to their collection on the new collectors.com site, and choose to have their valuable cards stored in an actual physical vault. Oh, and all of this will be accessible via a single login, their “Collectors ID”, which replaces the multiple logins customers previously had to manage across our different products and business units like PSA and PCGS. While all these new features are integrated to provide a seamless experience to our customers, we’re dealing with a significantly more complex architecture of multiple new systems, databases, and APIs on the backend. Naturally, as with any new product launch, our product managers were keen to get analytics about the use of these features from day 1: How many users actually converted to the new “Collectors ID”? How many items have customers submitted to the Vault? Who is using the new collectors.com “My Collection” feature? 

In order to provide these kinds of data insights right from the go-live, we had to coordinate with several engineering teams to get our hands on the right data and integrate it into our data warehouse. In total, we ended up pulling data from systems sitting on top of four different production databases that were being launched at the same time. Considering how many different systems and databases we were working across, the integrations went pretty smoothly! Within a day of the go-live, I had produced a few dashboards with key metrics that the product and business stakeholders started using immediately to track uptake of the new services. The road to getting there wasn’t straightforward though and involved some amount of scrambling, knocking on different doors, and a few small surprises during the go-live. In this post, I will share some of my lessons learned from integrating with a new production system when you’re looking to provide analytics from the get-go.

Logistics

I’m big on keeping running docs with notes from my conversations and findings when working on a project – I always say I outsource my brain into a Google doc. Keep a doc with (datestamped) notes and “to do” items for every piece of information you find and every open question, as well as a list of who’s responsible for what on the product, e.g. product managers, engineering leads, project managers, etc.

In addition to the running notes, connect with the business stakeholders (product managers, analysts…) early on to document a set of desired metrics along with their priorities and timelines: What do we need to know from day 1? What can wait until some time after the launch? This will also be helpful when exploring the new data models to determine what is actually being captured and what data points may not be available to calculate the required metrics.

If there are standing meetings for the engineering team that’s responsible for the database setup, I strongly recommend regularly sitting in on those meetings. Even if you don’t always understand everything that’s going on, it’s helpful to have the context of what the team is focusing on, and establish a relationship with them. As data engineers, we’re often pretty removed from our counterparts on the data producer side, but knowing the people on the team (and having them know you) can be helpful in working together more effectively. 

Infrastructure

Once you know who your engineering point of contact is, the first question you’ll want to ask is: How do we get access to the data? Assuming we’re talking about data that lives in a relational database, here’s a short check list of information you need to get from the engineering team that’s responsible for the database setup:

  • Find out (and document) what cloud service the database is hosted in
  • Will you get access to a production database or a read-replica? And what permissions will you get: read-only, or the ability to create temp tables or views if any of the tools in your pipeline need them?
  • Will there be dev and prod environments? What’s the timing for these being available?
  • How do users and services authenticate against the database? Do we need personal and/or service accounts to log in?
  • How will the logins be shared? Will you need access to a shared password storage?
  • Do you need an SSH tunnel setup to access the database from any of the tools in your data stack?

It’s best to try and get all these details ironed out as early as possible, since tasks like setting up SSH tunnels in particular can take some time. Make sure you can access the database as early as possible to avoid surprises later on, even if there is no meaningful data in there yet.

Data model

Now that we’ve covered physical access to the data, let’s take a look at things to consider when you’re working with a new data model. I got looped into the production database design process early on and was able to provide input on the data modeling (see also: establishing a good connection with the upstream engineering teams! They’re your friends!). This ensured that the data would be suitable for our data extraction tool (Stitch) and contained all relevant data. Again, assuming you’re working with a relational database, here are some questions you’ll want to cover when talking about the data model:

  • Where is the data model documentation and how is it being kept up to date?
  • For any fields containing value sets, such as status codes, where are the corresponding descriptions stored? Will there be lookup tables in the database, or will these only be stored in code? The latter means you will need to be able to access the up-to-date list of lookups through your infrastructure, e.g. by querying an API (or simply reading the API documentation).
  • Will there be JSON columns? What is the schema for those?
  • What are the constraints on each table and column, e.g. foreign key relationships, NULL values, default values?
  • For datetime fields, will they be stored with a timezone (they should be)?

Application and data flow

Perhaps most importantly, when trying to make sense of data coming from a production database, we need to understand the flow of the application: What workflows (user-created or automated) in the application modify the data in what way? This is absolutely crucial to handling the data correctly and drawing the right conclusions from it. For example:

  • How and when is a record created, and what fields are populated through what input?
  • What workflows cause records to be modified in what way? And what metadata is there to track modifications, e.g. a “last updated” timestamp?
  • Will update timestamps for events such as status changes be tracked in separate fields? Or will there be some kind of changelog table that captures these changes? This also trickles down into your data warehouse models, where you might need to start tracking status change dates right from the get-go.
  • How are deletions being handled? Will there be “hard deletes”, i.e. the record is simply removed, or “soft deletes”, i.e. the record gets an “is deleted” flag or a “deleted timestamp” field? And, along the same lines, is there a data retention policy that means data will be dropped or archived after a certain amount of time? (A quick sanity check for deletions and test data is sketched after this list.)
  • If the application is replacing a legacy application, will data be migrated? How do you recognize migrated data? Will there be any gaps or differences between migrated and newly created data?
  • Will there be realistic dummy data (i.e. data that adheres to the constraints and workflows described above) to develop our data models and metrics against?
  • Is there any chance of any test or dummy data getting into the production system? And if yes, how can we recognize and filter for it?
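
To make the deletion and test data questions a bit more concrete, here’s the kind of quick sanity check I like to run once I have read access (a hedged sketch only; the connection string environment variable, the go-live date, and the table and column names are all made up for illustration):

    import os

    import pandas as pd
    from sqlalchemy import create_engine

    # Read-only connection to the production replica (env var name is made up)
    engine = create_engine(os.environ["PROD_REPLICA_READONLY_URL"])

    # How are deletions handled? Count soft-deleted rows if such a flag exists.
    print(pd.read_sql(
        "SELECT is_deleted, COUNT(*) AS n FROM submissions GROUP BY is_deleted",
        engine,
    ))

    # Records created before the (placeholder) go-live date are often a sign
    # of test or migrated data sneaking in.
    print(pd.read_sql(
        "SELECT COUNT(*) AS n FROM submissions WHERE created_at < '2022-07-01'",
        engine,
    ))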

Ideally, your engineering and database admin teams will already have a “best practice” guide for designing new databases, which usually answers a lot of these questions. Otherwise, this might be a good time to start collecting these kinds of design decisions into a guide and encoding them in setup scripts where possible.

And finally…

I hope that this post has provided you with a starting point for a checklist for your next production data integration. All the questions I’ve covered in the above paragraphs should be treated as conversation prompts to elicit existing design decisions, or to help influence decisions that are yet to be made. There will likely be some oversights (I have yet to work with *the* perfect production database), but coming prepared with a plan may help you catch some of the biggest blockers to a good data integration early on. And even with the best preparation, you can probably expect to make some tweaks after the application go-live to adjust to last-minute database changes or correct some assumptions you’ve made about the data. Developing against an empty data model or even dummy data can be challenging, and you might not nail everything on the first try.

One last thing to keep in mind: As data consumers, our downstream use case will most likely be of lower priority than getting the production system stood up – and that’s totally okay. While I would love for data to always be a first-class citizen, I believe it’s pretty obvious that producing a stable production system needs to take priority, and we just need to accept that resource-constrained engineering teams may move slower on supporting a data integration. This is why you’ll want to get started early and get these kinds of tasks and questions on the engineering team’s radar as soon as possible.

CC-licensed photo by Ian Beckley: https://www.pexels.com/photo/top-view-photography-of-roads-2440013/

Building a data platform from scratch at Collectors: A tale in three parts

I wrote an epic blog post series about my experience building a data platform from scratch in my new job, using the “Modern Data Stack” (well, at least parts of it). The post is an account of my first six months at Collectors building a data platform. It is part memoir, part instructional manual for data teams embarking on a “build a data platform” journey. I figured this might be relevant for some of y’all data engineering folks and/or “data teams of one”, so check it out here: Building a data platform from scratch at Collectors: Part 1 (parts 2 and 3 are linked from the post).

Image credit: “under construction” by Pedro Moura Pinheiro is marked with CC BY-NC-SA 2.0.

How to do Exploratory Data Analysis

This article provides an overview of (possible) steps to perform an exploratory data analysis (EDA) on a data set. These instructions are largely based on my own experience and may be incomplete or biased; I just figured this might be helpful since there’s not a lot of content out there.

The goal of EDA

The goal of exploratory data analysis is to get an idea of what a new data set we’re working with looks like. We’re mainly interested in aspects such as the size and “shape” of the data set, date ranges, update frequency, distribution of values in value sets, distribution of data over time, missing values and sparsely populated fields, the meaning of flags, as well as connections between multiple tables. This goes along with notes and documentation about the findings with the purpose of having a permanent reference for this particular dataset. Once EDA is complete, we should be able to add an integration for the data and create analyses more easily than if we started from scratch.

Tools for EDA

EDA can be done in different ways depending on which toolkit you’re most comfortable with. Generally, you can start out simply writing SQL in your SQL workbench (e.g. DataGrip) or start immediately with a notebook (e.g. Jupyter or Hex). At the time of writing this (February 2022), my data team mostly uses Hex notebooks for EDA, as they allow sharing and commenting on analyses fairly easily.

Some basic principles

  1. In an ideal world, we wouldn’t have to do EDA, but be working with well documented data and an accurate and up-to-date entity relationship diagram. Ideally, we’d also have this data “profiled”, i.e. have some form of documentation with some basic statistics. This isn’t the case very often for production databases, which is why we do in-depth EDA. However, if we can find documentation or code, we should use it during EDA to inform our assumptions and insights.
  2. When looking to integrate source data, make sure to query the actual source data table, rather than any modification (view, subset, extract) of it – if possible. This is to make sure we’re not looking at data that might already have some issues introduced through our code.
  3. Keep a “running monolog” in the notebook or SQL script. Yes, the code is usually self-explanatory, but it’s good to document your thought process so that other people can follow along more easily. Examples:
    1. “Here I’m just looking at the row count.”
    2. “Let’s join set_item on the set table to see if the IDs match. I see that there are no missing joins, so this seems to be in sync.”
    3. etc.
  4. Keep in mind that you’re only looking at a snapshot of the data at this point in time, so all assumptions you’re making may not be true forever, unless they’re explicitly documented and asserted in code (see 1.).
  5. If you’re working with a large data set that’s very slow to query, pick a “reasonable” subset, e.g. restricted to a specific time frame.
  6. I usually just look at numbers, but occasionally having some lightweight data visualization can be helpful to see trends. The resources I’ve listed below have some more content on using data visualization for EDA.

How-to

There is no exact playbook for EDA since the steps depend on the type of data you’re looking at. Here are some high-level steps to follow (a pandas sketch tying several of them together appears after the list):

  • Print some sample data of the table (first few rows, or use a sample function if available), just to look at what kind of data each column contains.
    • If using Pandas, transposing a dataframe (df.T) can be helpful for reading through wide tables.
  • Print the data types for each column. Keep in mind that the database datatype and logical datatype might be different, e.g. an integer field may be used to represent a boolean value with 0/1.
  • Identify the primary key and potential foreign key columns, and relevant timestamp fields (e.g. created date, last updated).
  • Get some basic numbers for relevant fields:
    • Table row count
    • Unique count for the “primary key” field: Does it match the row count, i.e. is it really a unique primary key? If not, is there a set of fields that can uniquely identify a record, e.g. ID field + timestamp.
    • Min/max for numeric and date fields: What ranges are we looking at?
      • Note if there are values that look like dummy values, e.g. “1900-01-01” for dates, or dates in the future.
      • Note if there are values that look like outliers based on the column name, e.g. a “2088” in a field named “customer_age”.
    • Group by and counts for value set columns, i.e. categorical variables such as boolean fields or values from a fixed set, e.g. “service category”
      • Pay attention to NULL values – how sparsely populated is this column? Can we expect NULL values at all? Or do we have another “dummy” value that represents NULL, e.g. “Unknown”?
      • For boolean fields, do we have only true/false values, or do we also have true/false/NULL and if yes, what does NULL represent?
      • It’s fairly critical to find out which fields are free-text and which ones have controlled input. The database datatype in both cases will be text, but the logical type is either controlled input (categorical) or free-text.
  • Look at the distribution of record counts over time: Identify the relevant date field (if exists) and count the number of records for a reasonable time period, e.g. by month or year.
    • This gives us an idea of the volume of data to expect over time.
    • It also helps see at what point the data starts to be “complete”, better than just looking at the earliest date since we might just have a handful of records for specific dates.
  • If possible, use a library like Pandas Profiling to get a “profile” of the data
    • This captures most of the basic stats listed under “Get some basic numbers” as well as more detailed histograms, correlations, etc.
    • It can get a little unwieldy for large tables, so it might make sense to only focus on a subset of relevant fields.
  • If working with multiple tables, try and draw out a simplified high-level ERD (entity relationship diagram) to get an idea of how the tables join together and whether we have referential integrity.
    • Run the joins and confirm whether join fields always match or whether there are some “empty joins”. E.g. ask questions such as “Does every service_level_id in the submission table have a corresponding record in the service_level lookup table?”
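
To make this more concrete, here’s a minimal pandas sketch covering several of the steps above (the file, table, and column names are made up for illustration; in practice you’d run the equivalent SQL against the source table, or load a reasonable subset into a dataframe first):

    import pandas as pd

    # Load a reasonable subset of the source table (names are placeholders)
    df = pd.read_csv("submissions_sample.csv", parse_dates=["created_at"])

    # Sample rows; transposing helps when reading through wide tables
    print(df.head(5).T)

    # Data types per column (the logical type may differ, e.g. 0/1 booleans)
    print(df.dtypes)

    # Row count vs. unique count of the candidate primary key
    print(len(df), df["submission_id"].nunique())

    # Min/max for numeric and date fields: watch for dummy values and outliers
    print(df[["created_at", "item_count"]].agg(["min", "max"]))

    # Value counts for categorical columns, including NULLs
    print(df["service_category"].value_counts(dropna=False))

    # Distribution of record counts over time, by month
    print(df.set_index("created_at").resample("MS").size())

    # Referential integrity: does every service_level_id have a lookup record?
    service_levels = pd.read_csv("service_levels_sample.csv")
    joined = df.merge(service_levels, on="service_level_id", how="left", indicator=True)
    print((joined["_merge"] == "left_only").sum())

    # Optionally, generate a full profile report with the Pandas Profiling library:
    # from pandas_profiling import ProfileReport
    # ProfileReport(df, title="submissions EDA").to_file("submissions_profile.html")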

Other resources

I haven’t come across many posts that I found particularly helpful (maybe all the good content is tucked away in books?). Here are a few links to other sites that may be useful and complement this post:

Header image by Miguel Tejada-Flores via Flickr (CC BY-NC 2.0)

What’s next?

When I joined Flatiron Health in February 2014, I had no idea what to expect. I had just moved to New York City – my second ever trip to the US – with two suitcases, crashed on my friend’s couch, and walked into the office in the middle of a snowstorm (I got in late on my first day because I was left stranded by the MTA – pro move!). I was on a 1-year visa and didn’t even know whether it was going to get extended after the year was up, or whether the 20-person startup I had just joined after finishing my PhD in England was even going to last that long.

Almost 5 1/2 years later I’m now looking back on many late nights at the office, countless meals with my work family, a few drinks (just a few, really!), late night karaoke, rafting and ski trips, pipeline breaks and product launches, both great and absolutely horrifying client calls, several rounds of funding, an acquisition (us buying a company twice our size), another acquisition (this time us getting acquired), almost a thousand new employees, many farewells, wonderful relationships, challenging relationships, my first intern, my first direct report, my first time as a team lead, and my first goodbye to a company that I still talk about as “we” even though I officially left almost a month ago. As I like to tell people who ask me about my time at Flatiron: It’s been a wild ride.

So… what’s next? Honestly, I don’t know. I want to continue doing “data stuff”, but as a non-traditional (as far as the word “traditional” applies to a fairly new field) data scientist who puts data empathy and interpretability before building ML models, it’s going to be an interesting challenge to find the right fit for me. For now, I’m still based in NYC, enjoying the summer, plotting some travel, and reflecting on the things I’ve learned over the past few years.