
Adobe Data Scientist Interview Guide

Kevin Landucci, Content Manager
Learn how to prepare for the Adobe Data Scientist interview and get a job at Adobe with this in-depth guide.

The gist

Adobe is a paradox: for a company that makes widely respected, industry-leading products, they’ve got a markedly chill company culture. For example, the unwritten rule of thumb for getting an offer at Adobe is that you have to be better than the average data scientist currently there. That’s a stark contrast to Amazon’s famous rule, where you have to be in the top 10–15% of current employees to get hired.

Adobe has an almost academic approach to interviewing; they test your theoretical knowledge throughout the process. Based on your answers, Adobe interviewers like to riff on the spot, asking follow-up questions to “peel the onion.” For example, if you talk about past projects, expect follow-up questions asking you to explain the theory behind a technology or term you mentioned in those projects.

Adobe’s final round assesses candidates on four factors: technical, analytical, communication, and teamwork. The culminating test of the whole process is presenting your take-home project (in the final round) to a panel of technical and non-technical interviewers.

The average total compensation among the data science levels at Adobe is:

  • L2 = $181k
  • L3 = $229k
  • L4 (senior) = $247k
  • L5 = $426k

This guide was written with the help of data science interviewers at Adobe.

Before you apply

Interview process

Adobe has a fairly consistent process across most of its teams. There will be some variance, but most candidates will go through four rounds:

  • Recruiter screen to ensure you’re a fit on the surface level
  • Hiring manager screen to gauge your technical and soft skills
  • Technical screen measuring your table-stakes coding and analysis skills
  • Final round of five interviews, including presenting your take-home project to a panel

Recruiter

After filtering down candidates based on their resumes, the first step of the interview is a recruiter call. This call at Adobe is pretty standard compared to other large organizations.

Some recruiters ask technical questions and judge candidates based on whether or not they use the exact keywords the hiring manager told them to look out for. For example, the hiring manager could tell the recruiter to “look out for mentions of ‘instrumentation’ when they talk about their past projects.”

Sample questions include:

  • What are you looking for in your next role?
  • Why Adobe?
  • How many years of experience do you have in SQL (or Python or experimentation)?

Hiring manager

This is a medium-level-of-depth technical and soft skill assessment. The typical round will involve a simple problem (to see if candidates can apply some analytical framework and solve it quickly) and then a project overview.

You get to pick which project to discuss in the project overview. Based on the technical topics you raise while talking about projects, interviewers typically “peel the onion” and ask for more details about those topics. So, if you bring up experimentation, the follow-ups are likely to request more details on relevant sub-topics of experimentation (such as “What is alpha?” or “What is beta?”)
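Those alpha and beta follow-ups lead naturally into sample size questions (also on the tech screen's topic list). As a hedged, illustrative sketch, here is how the two error rates feed into a standard two-proportion sample-size calculation, using only the Python standard library (the 10% to 12% lift is an invented example, not an Adobe question):

```python
import math
from statistics import NormalDist

def sample_size_per_group(p1: float, p2: float,
                          alpha: float = 0.05, beta: float = 0.20) -> int:
    """Approximate sample size per group for a two-proportion z-test.

    alpha: probability of a false positive (Type I error)
    beta:  probability of a false negative (Type II error); power = 1 - beta
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(1 - beta)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# e.g. detecting a lift from 10% to 12% conversion at alpha=0.05, power=0.80
print(sample_size_per_group(0.10, 0.12))  # 3839 per group
```

Being able to explain each term (why alpha is halved for a two-sided test, why lowering beta raises n) is exactly the kind of "peeled onion" detail interviewers probe for.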

The most common mistake in this round is talking about the wrong project: for example, if the role is bullish on experimentation, don't give an overview of a project that has nothing to do with experimentation.

Less senior candidates tend to focus more on execution, and more senior candidates tend to focus more on impact. Driving revenue is the strongest form of impact to mention, followed by cutting costs. The rough tipping point is $5,000,000; impact above that is considered on par with Adobe's scale.

Sample questions include:

  • Tell me about one of your projects where experimentation was involved.
  • Tell me about your research topics. What is the difference between your methods and classic methods?

Technical

The 45-minute tech screen, conducted on CoderPad, measures table-stakes technical skills; it's split into two roughly 20-minute halves, one for coding and one for a case question.

For the coding portion, most candidates use SQL, but the questions could be solved in either SQL or Python. The typical SQL portion has three different questions which increase in complexity. Candidates get a mock data set which they evaluate and then write queries for.

For the case question portion, candidates are tested on their analytical skills with an ambiguous prompt such as: “Here is a test we’re going to run. How would you go about doing this?” The candidate should clarify the problem and develop the key metrics they would track. The metrics give Adobe a signal on whether or not this candidate understands the business they are in.

Once you’ve picked your metrics, you’ll be given mock results to interpret. The purpose of this is to answer the question: “Based on these results, should we launch or not? And why?”
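One way to ground that launch/no-launch answer is a quick significance check on the mock results. This is a minimal sketch, assuming a two-proportion pooled z-test and invented control/treatment numbers:

```python
from statistics import NormalDist

def two_proportion_pvalue(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates (pooled z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: control converted 500/10,000; treatment 580/10,000
p = two_proportion_pvalue(500, 10_000, 580, 10_000)
print(f"p-value = {p:.4f}")  # compare against the agreed alpha (e.g. 0.05)
```

A strong answer pairs the statistical result with the business context: even a significant lift may not justify launching if a guardrail metric regressed.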

Topics to prepare for:

  • Joins
  • Window functions like LAG
  • Self join
  • Group by
  • Percentage calculation
  • Aggregations
  • Subqueries
  • Sample size calculation
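To practice the window-function style of question locally, here is a hypothetical mock data set and a LAG-based day-over-day query, run through Python's built-in sqlite3 module (SQLite supports window functions from version 3.25; the table, columns, and values are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE daily_revenue (day TEXT, revenue REAL);
    INSERT INTO daily_revenue VALUES
        ('2024-01-01', 100.0),
        ('2024-01-02', 120.0),
        ('2024-01-03', 90.0);
""")

# LAG pulls the previous row's value, ordered by day,
# giving absolute and percentage day-over-day change.
rows = conn.execute("""
    SELECT day,
           revenue,
           revenue - LAG(revenue) OVER (ORDER BY day) AS change,
           ROUND(100.0 * (revenue - LAG(revenue) OVER (ORDER BY day))
                 / LAG(revenue) OVER (ORDER BY day), 1) AS pct_change
    FROM daily_revenue
    ORDER BY day;
""").fetchall()

for row in rows:
    print(row)  # first row has NULL change (no prior day)
```

Being ready to explain why the first row's change is NULL, and how a self join could produce the same result without window functions, covers two topics from the list above at once.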

Sample questions include:

  • Name two data types that are mutable and two that are immutable.
  • How would you handle an imbalanced dataset?
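For the mutability question, a concise Python answer with a concrete demonstration scores better than naming types from memory. A minimal sketch:

```python
# Mutable types can be changed in place: list and dict (also set, bytearray).
nums = [1, 2, 3]
nums.append(4)             # modifies the same object
config = {"retries": 3}
config["retries"] = 5

# Immutable types cannot: str and tuple (also int, frozenset).
name = "adobe"
upper = name.upper()       # returns a NEW string; name is unchanged
point = (1, 2)
try:
    point[0] = 9           # tuples reject item assignment
except TypeError:
    print("tuples are immutable")
```

For the imbalanced-dataset question, be ready to compare resampling (over/undersampling), class weights, and threshold tuning, and to note that accuracy is a misleading metric in that setting.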

Talk about trade-offs. Discussing the pros and cons of different approaches is a good way to score points with your interviewer.

Final round

The typical final interview has 5 rounds: hiring manager, product manager, sensitivity analysis, coding, and case study. The most critical portion of the final round is the case study interview. You will be given a take-home project beforehand, which you will present to a panel for 20–30 minutes. Typically, 10–25 minutes is reserved for Q&A.

Interview questions

Panel case presentation

The 45-minute case presentation is delivered to a handful of leaders, data scientists, and PMs. Product managers assess the business aspect, data scientists the functional, and leaders the behavioral.

The prompt is usually a broad problem, such as: “You are part of this pricing team. You’re tasked with understanding the patterns in our customer purchase behavior. Your task is to build a pricing model, test it, and make a recommendation.”

Some candidates take a heuristic approach, while others build an ML model (adding more data on top of the data they've been given). The heuristic approach works at the segment level (e.g., "for this segment of customers, we should offer this price"), while the ML approach works at the user level. Either one can work.
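As a hedged illustration of the segment-level heuristic (the segments, purchase amounts, and median-price rule below are all invented, not from Adobe's prompt):

```python
from collections import defaultdict
from statistics import median

# Hypothetical purchase history: (customer_segment, amount_paid)
purchases = [
    ("enterprise", 520.0), ("enterprise", 610.0), ("enterprise", 575.0),
    ("freelancer", 21.0), ("freelancer", 25.0), ("freelancer", 19.0),
]

by_segment = defaultdict(list)
for segment, amount in purchases:
    by_segment[segment].append(amount)

# Heuristic: price each segment at its median observed willingness to pay.
recommended_price = {seg: median(vals) for seg, vals in by_segment.items()}
print(recommended_price)  # {'enterprise': 575.0, 'freelancer': 21.0}
```

The ML alternative would instead train a user-level model on the same history plus extra features; the trade-off to articulate is interpretability and speed versus per-user precision.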

Don’t say:

The most likely way to mess up this round is poor structure in the presentation. This can manifest as long-winded, unnecessary tangents, or as failing to make a clear recommendation at the end: doing all the analysis but never actually making a choice about what to do.

Do say:

The ideal structure to follow is: state the background (the "why"); give a clear, quantifiable hypothesis (such as "We are doing X to improve Y by Z"); present a good set of key metrics (including a primary metric, a secondary metric, and a guardrail metric); walk through the set-up of the experiment and the foreseen challenges; and close with the recommendation for the business.

Metrics are where candidates vary the most. There are lots of ways to get this wrong, but the most common is having too many metrics. Less is more: it's better to have 5-6 really well-thought-out metrics than 10-20 half-baked ones. The question you don't want to hear is a PM saying, "I don't get why this metric is important," and it's a common one. The way to avoid it is to explain upfront why each metric matters, before anyone gets the chance to ask.

Note: This round provides the most reliable signal for whether you’ll fit in at Adobe.

One way to stand out is to discuss trade-offs throughout your presentation, in particular at the end when you recommend a course of action. A hiring manager described the strongest performance he had seen as: the candidate made multiple models and offered optionality, saying something like, “If the objective was X, I’d use this model, and if the objective was Z, I’d use that model.” This showed an acute knowledge of the trade-offs involved.

Anecdote from a hiring manager: “The best presentation I’ve seen stood out because it could have passed as an internal presentation.

“The slides they put together looked exactly like what we do internally. They looked at the website and used the same components we do: the same color theme and style. The folks on the panel joked, ‘Is this an internal presentation or an external presentation?’ It was really clear this candidate had done their homework.”

Coding

SQL at the onsite is quite similar to the SQL portion of the tech screen. Traditionally, at the onsite, it’s a whiteboard coding round focused on SQL.

Sample topics include:

  • Sorting algorithms
  • Arrays


Product manager

This round measures your soft skills, product sense, and past experience working with product managers. This is a fairly conversational round with small analytical questions or hypotheticals based on your past experience.

Sample questions include:

  • How would you explain the correlations you see in this colored plot?
    • This question tests your ability to tell a convincing story based on a plot drawn on a whiteboard.

Sensitivity analysis

The interviewer in this round is either a data engineer or a financial analyst. In either case, this is a technical round.

In the financial analyst case, you're given a model and asked to come up with different scenarios, such as best-case, worst-case, and moderate. It's about varying multiple inputs to see how the outcome changes.

If your interviewer is a data engineer, you’ll work through a data modeling problem or talk about your experience working with data engineering. You don’t need to go deep into data modeling, but you’ll have to build a prototype.
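The financial-analyst version of this round can be sketched with a toy model. Everything below (the revenue formula, the scenario inputs) is invented for illustration; the point is the pattern of varying inputs and comparing outcomes:

```python
# Toy revenue model: revenue = users * conversion_rate * average_price
def revenue(users: float, conversion: float, price: float) -> float:
    return users * conversion * price

# Sensitivity analysis: re-run the model under different input scenarios.
scenarios = {
    "worst":    dict(users=80_000,  conversion=0.02, price=9.99),
    "moderate": dict(users=100_000, conversion=0.03, price=9.99),
    "best":     dict(users=120_000, conversion=0.04, price=12.99),
}
for name, params in scenarios.items():
    print(f"{name:>8}: ${revenue(**params):,.0f}")
```

In the interview, the follow-up is usually which input the outcome is most sensitive to, so vary one parameter at a time before combining them into named scenarios.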

Sample questions include:

  • Create confidence intervals for a finite sampled population.
  • For large-scale categorical-output regression with a large number of features, how do you decide when to use logistic regression and when to use collaborative filtering?
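For the finite-population question, the key idea is the finite population correction (FPC) applied to the standard error. A minimal sketch for a proportion, with invented numbers:

```python
import math
from statistics import NormalDist

def proportion_ci_finite(p_hat: float, n: int, N: int,
                         alpha: float = 0.05) -> tuple[float, float]:
    """CI for a proportion sampled without replacement from a finite population.

    The FPC term sqrt((N - n) / (N - 1)) shrinks the standard error as the
    sample covers more of the population (it reaches 0 when n == N).
    """
    z = NormalDist().inv_cdf(1 - alpha / 2)
    fpc = math.sqrt((N - n) / (N - 1))
    se = math.sqrt(p_hat * (1 - p_hat) / n) * fpc
    return p_hat - z * se, p_hat + z * se

# e.g. 30% observed in a sample of 400 from a population of 2,000
lo, hi = proportion_ci_finite(p_hat=0.30, n=400, N=2_000)
print(f"95% CI: ({lo:.3f}, {hi:.3f})")
```

Noting when the correction matters (roughly, when the sample exceeds 5% of the population) is the kind of trade-off discussion this round rewards.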

Hiring manager

It's typical for candidates to have a behavioral round with the hiring manager during the final round. That makes two rounds with the hiring manager across the entire interview loop; Adobe hiring managers get roughly twice as much face time with candidates as their counterparts at most other companies. This is an opportunity to impress the person with the most sway.

Themes of questions:

  • Conflict management
  • Career path
  • Self-awareness


FAQs

How should I prepare for an Adobe Data Scientist interview?

Brush up on your SQL, ML concepts, probability, and, most importantly, your presentation skills.

How much do Adobe Data Scientists make?

The average total compensation is:

  • L2 = $181k
  • L3 = $229k
  • L4 (senior) = $247k
  • L5 = $426k

How long is the Adobe Data Scientist interview process?

The Adobe process typically takes 1-2 months to complete.
