Adobe is a paradox: for a company that provides widely respected, industry-leading products, they've got a markedly chill company culture. For example, the unwritten rule of thumb for getting an offer at Adobe is that you have to be better than the average data scientist currently there. That's a stark contrast to Amazon's famous rule that you have to be in the top 10–15% of current employees to get hired.
Adobe has an almost academic approach to interviewing; they test your theoretical knowledge, somewhat unpredictably, throughout the process. Based on your answers, interviewers oscillate between prepared questions and ones riffed on the spot to “peel the onion.” For example, if you talk about past projects, expect follow-up questions asking you to explain the theory behind a technology or term you mentioned in those projects.
Adobe’s final round assesses candidates on four factors: technical, analytical, communication, and teamwork. The culminating test of the whole process is presenting your take-home project, at the final round, to a panel of technical and non-technical interviewers.
The average total compensation among the data science levels at Adobe is:
This guide was written with the help of data science interviewers at Adobe.
Adobe has a fairly similar process across most of its teams. There will be some variance, but most candidates will go through four rounds:
After filtering down candidates based on their resumes, the first step of the interview is a recruiter call. This call at Adobe is pretty standard compared to other large organizations.
Some recruiters ask technical questions and judge candidates based on whether or not they use the exact keywords the hiring manager told them to look out for. For example, the hiring manager could tell the recruiter to “look out for mentions of ‘instrumentation’ when they talk about their past projects.”
Sample questions include:
This is a medium-depth assessment of technical and soft skills. The typical round involves a simple problem (to see whether candidates can apply an analytical framework and solve it quickly) followed by a project overview.
You get to pick which project to discuss in the project overview. Based on the technical topics you raise while talking about projects, interviewers typically “peel the onion” and ask for more details about those topics. So, if you bring up experimentation, the follow-ups are likely to probe relevant sub-topics of experimentation (such as “What is alpha?” or “What is beta?”).
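For reference, those follow-ups probe standard hypothesis-testing definitions (nothing Adobe-specific):

$$\alpha = P(\text{reject } H_0 \mid H_0 \text{ is true}), \qquad \beta = P(\text{fail to reject } H_0 \mid H_1 \text{ is true}), \qquad \text{power} = 1 - \beta.$$

In an A/B-testing context, α is the significance level (the false-positive rate you're willing to accept) and 1 − β is the power used in sample-size calculations.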
The most common mistake in this round is talking about the wrong project: for example, giving an overview of a project that has nothing to do with experimentation when the role is heavily focused on experimentation.
Less senior candidates tend to focus more on execution, and more senior candidates tend to focus more on impact. Driving revenue is the best way to demonstrate impact, followed by cutting costs. The rough tipping point is $5,000,000; anything above this is considered impact at a magnitude on par with Adobe’s scale.
Sample questions include:
The 45-minute tech screen, done on CoderPad, measures table-stakes technical skills; it’s split into two roughly 20-minute halves, one measuring coding and the other case-question skills.
For the coding portion, most candidates use SQL, but the questions can be solved in either SQL or Python. The typical SQL portion has three questions that increase in complexity. Candidates get a mock data set, which they evaluate and then write queries against.
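For a feel of the format, here’s a rough, hypothetical illustration; the table, columns, and questions below are made up, not Adobe’s actual prompts, and since either language is accepted, the sketch uses Python (pandas):

```python
import pandas as pd

# Hypothetical mock data set: one row per purchase event.
purchases = pd.DataFrame({
    "user_id":   [1, 1, 2, 3, 3, 3],
    "product":   ["Photoshop", "Illustrator", "Photoshop", "Photoshop", "Premiere", "Illustrator"],
    "amount":    [20.99, 22.99, 20.99, 20.99, 22.99, 22.99],
    "purchased": pd.to_datetime([
        "2024-01-03", "2024-01-15", "2024-01-20",
        "2024-02-01", "2024-02-10", "2024-02-11",
    ]),
})

# Q1 (easy): total revenue per product.
revenue_by_product = purchases.groupby("product")["amount"].sum()

# Q2 (harder): for each user, the first product they ever purchased.
first_purchase = (
    purchases.sort_values("purchased")
             .groupby("user_id")
             .first()[["product", "purchased"]]
)

print(revenue_by_product)
print(first_purchase)
```

The later questions tend to layer on exactly this kind of extra step (ordering, de-duplication, joining a second table), so practice building a query up in stages.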
For the case question portion, candidates are tested on their analytical skills with an ambiguous prompt such as: “Here is a test we’re going to run. How would you go about doing this?” The candidate should clarify the problem and develop the key metrics they would track. The metrics give Adobe a signal on whether or not this candidate understands the business they are in.
Once you’ve picked your metrics, you’ll be given mock results to interpret. The purpose of this is to answer the question: “Based on these results, should we launch or not? And why?”
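To make the launch call concrete, here’s a minimal sketch of how you might read mock results; the conversion counts, the primary metric, and the 5% significance threshold are all assumptions for illustration:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical mock results: conversions and sample sizes for control vs. treatment.
conversions = [460, 520]        # control, treatment
samples     = [10_000, 10_000]

# Two-proportion z-test on the (assumed) primary metric, conversion rate.
z_stat, p_value = proportions_ztest(count=conversions, nobs=samples)

lift = conversions[1] / samples[1] - conversions[0] / samples[0]
print(f"absolute lift: {lift:.2%}, p-value: {p_value:.3f}")

# Tie the numbers back to a decision, since that's the question being asked.
if p_value < 0.05 and lift > 0:
    print("Recommend launch: the lift is positive and statistically significant.")
else:
    print("Hold off: the result is negative or not significant; consider a longer test.")
```

Whatever numbers you’re handed, the signal the interviewer wants is that explicit tie from the statistics to a launch recommendation.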
Topics to prepare for:
Sample questions include:
Talk about trade-offs. Discussing the pros and cons of different approaches is a good way to score points with your interviewer.
The typical final interview has five rounds: hiring manager, product manager, sensitivity analysis, coding, and case study. The most critical portion of the final round is the case study interview. You will be given a take-home project beforehand, which you will present to a panel for 20–30 minutes; typically, 10–25 minutes is reserved for Q&A.
The 45-minute case presentation will be presented to a handful of leaders, data scientists, and PMs. Product managers will assess the business aspect, data scientists the functional, and leaders the behavioral.
The prompt is usually a broad problem, such as: “You are part of this pricing team. You’re tasked with understanding the patterns in our customer purchase behavior. Your task is to build a pricing model, test it, and make a recommendation.”
Some candidates take a heuristic approach, while others build an ML model (adding more data on top of the data they’ve been given). The heuristic approach is segment-level (e.g., “for this segment of customers, we should offer this price”), while the ML model is more of a user-level approach. Either one of these approaches can work.
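As a toy contrast between the two (all of the data, features, and segment names below are invented; the real take-home data set will look different):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical customer-level data from the take-home prompt.
customers = pd.DataFrame({
    "segment":        ["SMB", "SMB", "Enterprise", "Enterprise", "Education"],
    "seats":          [5, 8, 250, 400, 60],
    "past_spend":     [400, 650, 30_000, 52_000, 3_500],
    "observed_price": [80, 85, 120, 130, 55],   # price per seat they accepted
})

# Heuristic, segment-level: one price per segment (here, the median accepted price).
segment_prices = customers.groupby("segment")["observed_price"].median()
print(segment_prices)

# ML, user-level: predict a price per customer from their features.
X = customers[["seats", "past_spend"]]
y = customers["observed_price"]
model = LinearRegression().fit(X, y)
customers["modeled_price"] = model.predict(X)
print(customers[["segment", "modeled_price"]])
```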
❌ Don’t:
The most likely way to mess up this round is poor structure in the presentation. This can manifest as long-winded, unnecessary tangents, or as failing to make a clear recommendation at the end: doing all the analysis but never actually making a choice about what the business should do.
✅ Do:
The ideal structure to follow is stating the background (the “why”), followed by a clear, quantifiable hypothesis (such as “We are doing X to improve Y by doing Z”), a good set of key metrics (including a primary metric, secondary metric, and guardrail metric), the set-up of the experiment, the foreseen challenges, and then the recommendation for the business.
Metrics have the most variance in how different candidates approach them. There are lots of ways to do this wrong, but the most common is having too many metrics. Less is more: it’s better to have 5–6 really well-thought-out metrics than 10–20 half-baked ones. When talking about metrics, the question you don’t want to hear is a PM saying, “I don’t get why this metric is important.” That happens often. The way to avoid it is to explain why each metric matters, upfront, before anyone gets a chance to ask.
Note: This round provides the most reliable signal for whether you’ll fit in at Adobe.
One way to stand out is to discuss trade-offs throughout your presentation, in particular at the end when you recommend a course of action. A hiring manager described the strongest performance he had seen as: the candidate made multiple models and offered optionality, saying something like, “If the objective was X, I’d use this model, and if the objective was Z, I’d use that model.” This showed an acute knowledge of the trade-offs involved.
Anecdote from a hiring manager: “The best presentation I’ve seen stood out because it could have passed as an internal presentation.
The slides they put together looked exactly like what we do internally. They looked at the website and used the same components we do—the same color theme and style. The folks on the panel joked, ‘Is this an internal presentation or an external presentation?’ It was really clear this candidate had done their homework.”
The onsite SQL round is quite similar to the SQL portion of the tech screen; traditionally, it’s a whiteboard coding round focused on SQL.
Sample topics include:
Sample questions include:
This round measures your soft skills, product sense, and experience working with product managers. It’s a fairly conversational round, with small analytical questions or hypotheticals based on your past experience.
Sample questions include:
The interviewer in this round is either a data engineer or a financial analyst. In either case, this is a technical round.
In the financial analyst case, you’re given a model and asked to come up with different scenarios, such as best-case, low-case, and moderate scenarios. The point is to vary multiple inputs and see how the outcome changes.
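A minimal sketch of that kind of scenario analysis, with an invented driver-based revenue model and made-up scenario assumptions:

```python
# Hypothetical driver-based model: revenue = customers * conversion rate * average price.
scenarios = {
    "best":     {"customers": 120_000, "conversion": 0.06, "avg_price": 32.0},
    "moderate": {"customers": 100_000, "conversion": 0.05, "avg_price": 30.0},
    "low":      {"customers":  85_000, "conversion": 0.04, "avg_price": 28.0},
}

for name, s in scenarios.items():
    revenue = s["customers"] * s["conversion"] * s["avg_price"]
    print(f"{name:>8} case: projected revenue = ${revenue:,.0f}")
```

Being able to say which input the outcome is most sensitive to is what earns points here.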
If your interviewer is a data engineer, you’ll work through a data modeling problem or talk about your experience working with data engineering. You don’t need to go deep into data modeling, but you’ll have to build a prototype.
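If a quick prototype is all that’s needed, something like the star-schema outline below is usually enough; the entities and fields are assumptions for illustration, and in the room you’d likely draw this rather than code it:

```python
from dataclasses import dataclass
from datetime import date

# Dimension tables: descriptive attributes.
@dataclass
class DimUser:
    user_id: int
    plan: str          # e.g., "individual", "teams"
    country: str

@dataclass
class DimProduct:
    product_id: int
    name: str          # e.g., "Photoshop"

# Fact table: one row per subscription event, keyed to the dimensions.
@dataclass
class FactSubscription:
    user_id: int       # foreign key -> DimUser
    product_id: int    # foreign key -> DimProduct
    start_date: date
    monthly_price: float
```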
Sample questions include:
It's typical for candidates to have a behavioral round with the hiring manager during the final round. That’s two rounds with the hiring manager in the entire interview loop; Adobe hiring managers get roughly twice as much face time with candidates as their counterparts at most other companies! This is an opportunity to impress the person with the most sway.
Themes of questions:
Brush up on your SQL, ML concepts, probability, and, most importantly, your presentation skills.
The average total compensation is:
The Adobe process typically takes 1–2 months or less to complete.