PitchBook: Designing an AI-Powered Data Upload and Verification Tool

At a Glance

I led the design of PitchBook's first AI-assisted data operations system, successfully pitching it to the Chief Product Officer for a place on the product roadmap.

My Role: Product Design Intern, Expanded into AI Design Strategy & Product Design
Team: 1 Product Designer, 2 PMs, 1 Design Manager
Duration: 10-Week Sprint
Key Contributions: Driving a C-Suite Initiative, Leading AI Design Strategy, Owning End-to-End UX/UI, Systems & Workflow Optimization
Tools: Figma, Uizard AI, Figma Make, Lovable, Claude & Gemini, LucidChart, Jira & Confluence
Outcome: Designed a cornerstone AI tool for PitchBook's new company-wide data platform, projected to make private data operations:

2.6X Faster

Data operations

62% Less Time

For data upload and verification

Context

Problem: After merging with Morningstar's public operations, PitchBook's data operations needed to adapt to new scale and complexity.

My Mission: Design a tool that drastically accelerates the data pipeline, improves accuracy, and reduces researchers' cognitive burden, shifting their work from tedious data entry to high-impact strategic review.

Solution: I designed an AI data review system that automates simple data edits and guides complex edits through AI confidence scores, transforming researchers' role from data entry to strategic oversight.

Research and Discovery

PitchBook dominates private market data critical to PE/VC decisions. The Primary Data Sourcing (PDS) team plays a crucial role in ensuring that private data remains consistently updated and reliable, using a sophisticated research tool suite (RTS).

However, the process of updating company profiles within the existing RTS system is complex and inefficient, primarily due to the manual steps required to transfer data between the Survey Portal Review Tool and RTS. To enhance data upload efficiency and minimize error rates, PDS researchers require a more streamlined process.

Survey Submission

Company completes and submits their data survey online or through email.

Researcher Analysis

PDS researchers analyze edits and update the private data system (RTS).

Data on Platform

Private market data is updated and available to PitchBook clients.


Key User Segment & Pain Points (From PRD)

Coming into the project, a solid body of user research was already available. Since the users were internal, their persona was well defined. This is the information I worked from before diving deeper into the problem space myself.

THE USER

Primary Data Sourcing Researchers at PitchBook, the backbone of the company's data operations.

THEIR GOALS

To enter and verify complex survey data with maximum efficiency and accuracy.

THE STAKES

They operate under aggressive performance goals, such as processing 26 surveys per day.

PAIN POINTS

Workflow Inefficiency & Context Switching

Constant context-switching between tools wasted valuable time.

Lack of Clarity & Progress Tracking

Difficulty tracking progress caused researchers to re-check their work.

Low Confidence & Fear of Errors

A persistent worry of making unrecoverable, data-damaging errors.

Discovery: Beyond the PRD

To fully understand the problem space, I had to go beyond the research provided in the PRD and investigate the user pain points myself.

User Research Methodology:

  • 10+ stakeholder interviews with PDS researchers across Seattle and Mumbai

  • 3 shadow sessions observing daily workflows

  • Analysis of performance metrics and error rates

The “Aha” Moment

During my 6am meeting with the Mumbai research teams, I discovered something crucial. The researchers' workflow wasn't just inefficient from tab switching for edits; there was also mental friction from verifying data through external sources.

This cognitive strain came from unstructured verification searches across multiple sources while trying to maintain accuracy and speed. The data verification process was inefficient for two key reasons:

Cognitive Strain: Researchers experienced mental friction from having to search through unstructured internal notes and various external sources.

Time Consumption: This multi-source search was extremely time-consuming, delaying how quickly updated data reached end-users.

To help conceptualize my target user's current pain points and workflow, I created a series of user flows.

Editing process and pain points on the left; data verification and pain points on the right.

Ideation Phase

Based on my discovery findings, I created potential solutions and started sketching, creating user flows, and making wireframes in Uizard to help communicate impactful features.

Ideation Goals:

  • Increase the speed of data edit uploads.

  • Boost accuracy during the editing process.

  • Improve efficiency by keeping researchers within a single tool.

  • Boost researcher confidence in the edits they make.

Ideal user flow with resolved pain points (green dots)

Foundational Quick Wins

These "quick win" features were designed for immediate, high-impact improvements:

  • Direct Edits: Allowing users to approve/reject edits directly within the survey kept them from having to switch tools.

  • Flagging Tool: Saved researchers immense time by letting them flag items that needed company outreach and deal with them all at the end.

  • Internal Notes Access: Allowed researchers to validate data without having to switch back to the RTS system.

My PM estimated these edits would reduce data upload time by 20%. They were high impact and initially thought to be cheap to develop.

The AI Visionary Solution: AI Automation and Verification

After meeting with product leadership about AI, I realized our verification process was perfect for automation.

How the AI Confidence Scores Work:

  • Automated Edits: If the confidence rating for an edit was high, the system would automatically approve it, requiring no human intervention.

  • Human Review: For edits with medium or low confidence, the system would flag them for manual verification by a researcher.

  • Guided Verification: The confidence score would provide its underlying logic and direct the user to helpful internal and external sources to speed up the review process.

  • Explainable AI (XAI): Researchers could see the exact sources the AI used to make its decision, ensuring transparency and building trust in the system.

Projected to reduce edit time by over 50% through intelligent automation (calculated from existing performance metrics).
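To make the routing logic concrete, here is a minimal sketch of how confidence-based triage could work. The threshold value and names are illustrative assumptions, not PitchBook's production values.

```python
from enum import Enum

class Action(Enum):
    AUTO_APPROVE = "auto_approve"   # high confidence: no human intervention
    HUMAN_REVIEW = "human_review"   # medium/low confidence: flagged for a researcher

# Illustrative cutoff only; the real threshold would be tuned by the AI/ML team.
HIGH_CONFIDENCE = 0.90

def route_edit(confidence: float) -> Action:
    """Triage a proposed data edit by its AI confidence score (0.0-1.0)."""
    if confidence >= HIGH_CONFIDENCE:
        return Action.AUTO_APPROVE  # system applies the edit automatically
    return Action.HUMAN_REVIEW      # card surfaces rationale + sources for review
```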

Early Validation & AI/ML Collaboration

To ensure my designs were impactful, I set up time with the data operations team for feedback. They also told me that multiple other teams would want to jump on board with an AI tool that could confirm data through the strict methodology their team follows. This feedback made me realize I could create an AI tool that scales across multiple teams and operations.

After getting the green light from the whole design team, I started collaborating with the head AI/ML PM for feedback. The PM was enthusiastic about the idea and told me about similar company-wide AI initiatives around confidence scores and automation; my project had accidentally aligned with major strategic priorities.

They needed a dedicated designer for these initiatives, and I stepped up, as an intern.

Designing for Immediate Impact

After the features were validated by stakeholders, it was time to bring my tier 1 designs to high fidelity. Here are the first iterations of the phase 1 edits.

To prove the immediate impact of these simple, high-impact features, I collaborated with a senior designer to estimate it. Using Morningstar research performance metrics and shadow session notes, we estimated a 31% reduction in data upload time, cutting 315 hours of manual work every month.

Strategic Pivot to an AI-Powered System

As these simple, high-impact solutions were being finalized, a new company-wide initiative (UDCP) created a major engineering constraint. This changed the entirety of my internship: I shifted from a project of incremental improvements to a complete AI-powered redesign for massive impact.

This new Unified Data Collection Platform combines private and public data operations into one unified platform, boosting data upload efficiency and accuracy for PitchBook and Morningstar. Leadership questioned whether my designs would fit the new system and considered putting my original project on the back burner.

In a high-stakes meeting with senior PMs and my Design Manager, I advocated for my phase 1 "quick wins," as they provided immediate value to the research team, arguing for the immediate impact of a 31% time reduction in data upload and verification for the survey tool. We compromised: we would investigate the engineering scope of the quick wins before making a final decision.

Engineering Scope to Massive Scope Expansion

Even for the most basic features, the Ukraine-based engineering team estimated two full quarters of development. These "quick wins" weren't quick at all. With this new data, it was clear that pursuing the high-impact AI features was the only logical path forward, as the phase 1 edits were no longer feasible.

As we shifted toward the AI features, my manager explained that for them to be fully developed, the whole survey needed to fit into UDCP: the AI features had to be universal across the entire platform, and my survey redesign had to incorporate these new designs.

She gave me an out, asking if I wanted to focus on a simpler, intern-level UI project and pass these senior-level responsibilities to a more experienced designer. I immediately said no: I wanted the challenge.

My project requirements expanded, and I now had to lead the design of the AI automation, the confidence score, and a full redesign of the survey tool for the new platform.

Expanded Mission & Design Iterations

New Expanded Mission

  • Redesign the survey tool for UDCP, integrating the AI tools I'd designed to support it

  • Lead the design of the AI Automation and Confidence Score tools across all of UDCP and get them approved by leadership

  • Define the technical implementation to scale these AI tools with the lead PM for AI/ML

Jumping into Solutions

My focus shifted to designing features I knew would be impactful, with a core emphasis on explainable AI (XAI) for the confidence score. My first task was to fit the old survey tool into the universal UDCP format.

AI Automation Communication/Override and Logic

The main focus, though, was the AI solutions. The mid-fidelity AI designs I took over lived within a drawer, and I had to redesign the survey with this in mind for UDCP. As I redesigned these tools, AI transparency was crucial: researchers needed to immediately know what had been automated by AI and what exactly was expected of them.

Confidence Score Example

Confidence score drawer in ECS (a Public tool in UDCP)

A core principle of my design was ensuring researchers always felt in control, giving them the ability to undo any AI-automated edit and see the logic behind it (XAI), avoiding "black box" problems.

Critical Design Review & The Bigger Picture

During a review with 20+ designers, I asked specific questions and the team identified critical holes in the workflow and AI communication features:

Disconnected & Fragmented Workflow

This initial design failed because it lacked a clear, unified task zone. As the arrows illustrate, users were forced to jump between the top navigation (to see the summary and submit), the main content area (to see data in context), and the bottom drawer (to review the AI suggestions and navigate). This chaotic, fragmented path created high cognitive load and directly contradicted the primary user goal of a streamlined, efficient workflow.

Ineffective AI Communication

While the detailed logic within the confidence scores was praised, the general communication of AI actions was seen as a distraction. Designers argued that highlighting past AI edits and using color highlighting was "overkill," pulling users from their immediate tasks rather than enhancing the workflow.

Positive Feedback on XAI

However, the team praised the detailed logic in the confidence scores, affirming that this approach to explainable AI was critical for building researcher trust, especially to avoid the "black box" problem.

Revelation: I Wasn't Just Redesigning a Tool

  • My project was the very first private markets tool being designed for the new company-wide UDCP platform, PitchBook's most valuable business asset.

  • This meant my work went beyond a simple redesign; I was responsible for laying the foundational systems and design patterns that all future private market tools on the platform would follow.

  • This realization dramatically raised the stakes and my personal drive to deliver a bulletproof concept following the review.

Redesign: Back to Mid-Fidelity

I was advised to go back to mid-fidelity to speed up the ideation process and ensure the concept had bulletproof, human-centered logic before moving to high fidelity.

I started by mapping out user jobs to be done and tying them to solutions.

New Flow and Ideation

Then a big idea hit me: instead of cramming features into a single view, I needed to create two distinct modes, one for speed and one for context, connected by a universal navigation system that researchers could seamlessly switch between. The researchers' tasks would be confidence score cards, which would be exactly the same in both modes and across the entire platform.

Borrowing from the review-mode pattern, users would instantly know how many edits they needed to review and could get to work immediately. Their subtasks would be the AI confidence scores, and the system would fully automate high-confidence edits, except for data-sensitive changes like rounds, exits, and funds, which are critical for our PE users.

There would be two main ways of completing these edits:

Review Mode: A hyper-streamlined mode that shows only the AI confidence scores, without the contextual noise.

Edit Mode: A contextual view where researchers see the edit within the survey, with an AI drawer containing the exact same subtasks as Review Mode.

This concept was ready: it had no logic holes, was deeply impactful, and perfectly matched the new UDCP design vision. I was ready to create mid-fidelity designs to iterate on these new ideas.

This dual-workflow, AI-hybrid model was a significant innovation for PitchBook, as many competitors still rely on a fully manual data verification process.
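As a sketch of the automation guardrail, the sensitive-field exception could sit on top of the confidence triage. The field names come from the paragraph above, while the threshold and function name are hypothetical.

```python
# Data-sensitive categories that are never fully automated, regardless of confidence,
# because rounds, exits, and funds are critical for PE users.
SENSITIVE_FIELDS = {"rounds", "exits", "funds"}

def should_auto_resolve(field: str, confidence: float, threshold: float = 0.90) -> bool:
    """High-confidence edits auto-resolve, except edits to data-sensitive fields."""
    if field in SENSITIVE_FIELDS:
        return False  # always routed to a researcher's review queue
    return confidence >= threshold

# Example: a high-confidence employee-count edit auto-resolves,
# but an equally confident funding-round edit still goes to a researcher.
assert should_auto_resolve("employee_count", 0.97) is True
assert should_auto_resolve("rounds", 0.97) is False
```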

Mid-fidelity & Strategic Alignment

In my mid-fidelity designs, the confidence scores were at the center of both modes. I redesigned them to make clear what was edited and provided a short, comprehensive AI rationale to assist decision-making. Instead of showing all of the overwhelming AI logic up front, I added a button to dive deeper, supporting XAI.

Review Mode (Starting place)

Edit Mode (Contextual View)

Confidence Score with AI Logic Layer

Stakeholder and Design Review Validation

With my mid-fidelity prototype ready, I needed to ensure it was impactful for the research team. I had a call with 6+ researchers and the director of research in Mumbai, presenting the work, giving context, and answering questions.

It was a huge success, and the researchers were genuinely excited about the new workflow. Their only concern was the internal notes modal; they wanted me to ensure it contained all the necessary notes. Another suggestion was an AI feedback loop, which I thought was brilliant.

To refine it even more, and to showcase the project's design growth, I presented at my third design review with 20+ designers. Bottom line: the direct stakeholders and the design team were behind my designs and were impressed. With these new refinements, it was truly a bulletproof concept, ready for high-fidelity design.

AI/ML Engineering Showcase

With the AI/ML team, I proposed an AI model framework that is consistent across UDCP. The system would query internal RTS databases and external APIs to cross-reference data points against established validation rules.

These two factors are weighted to produce the final confidence score:

  • Source Accuracy: The AI scrapes internal (RTS) and external (news, websites) sources and assigns an accuracy rating to the data.

  • Methodology: The AI cross-references the edit with the strict, established methodologies that researchers are trained to follow. This would also serve as a quality check for ECS (the public tool).

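One simple way to express that weighting is a convex combination of the two factor scores. The weights below are placeholders chosen for illustration, not the values the AI/ML team would ship.

```python
def confidence_score(source_accuracy: float, methodology_fit: float,
                     w_source: float = 0.6, w_method: float = 0.4) -> float:
    """Combine the two weighted factors into a final 0.0-1.0 confidence score.

    source_accuracy: accuracy rating derived from internal (RTS) and external sources.
    methodology_fit: how closely the edit matches the team's established methodology.
    """
    assert abs(w_source + w_method - 1.0) < 1e-9, "weights must sum to 1"
    return w_source * source_accuracy + w_method * methodology_fit

# Example: strong source agreement, moderate methodology fit.
print(round(confidence_score(0.95, 0.80), 2))  # 0.89
```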

Working with the AI/ML PM, we defined the technical architecture: the system processes survey documents through an orchestration layer called a "Datapoint Dictionary" that applies business logic to identify the datapoints relevant for confidence scoring. Success would be measured through Human-in-the-Loop (HIL) feedback and accuracy testing against Golden Sets (standard ML validation datasets).
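Here is a minimal sketch of that flow as I understood it; `extract_datapoints`, the relevance callback, and the Golden Set shape are all my own illustrative stand-ins, not the production implementation.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Datapoint:
    field: str
    old_value: str
    new_value: str

def extract_datapoints(survey_doc: dict) -> Iterable[Datapoint]:
    """Flatten a submitted survey into candidate edits (assumed shape: field -> (old, new))."""
    for field, (old, new) in survey_doc.items():
        yield Datapoint(field, old, new)

def select_for_scoring(survey_doc: dict, is_relevant: Callable[[str], bool]) -> list[Datapoint]:
    """'Datapoint Dictionary' step: business logic picks which datapoints get scored."""
    return [dp for dp in extract_datapoints(survey_doc) if is_relevant(dp.field)]

def golden_set_accuracy(predict: Callable, golden_set: list[tuple]) -> float:
    """Offline check: fraction of known-correct Golden Set outcomes the model reproduces."""
    correct = sum(predict(inputs) == expected for inputs, expected in golden_set)
    return correct / len(golden_set)
```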

High-Fidelity Execution

After securing alignment from all stakeholders, I presented the final designs in a product review to PitchBook's CPO, PM Directors, and Design Leadership. The vision was approved and adopted.

The core of the solution is a Dual-Mode Interface designed for two distinct user mindsets: a hyper-focused Review Mode for speed, and a context-heavy Edit Mode for deep analysis. The system intelligently protects sensitive data (like funding rounds) from full automation, always keeping the researcher in control of what matters most.

This project's impact went far beyond a single tool. As the first private markets tool on the new company-wide platform, my work established the foundational design patterns and scalable AI components that will be used across the entire private data ecosystem.

Review Mode

Review Mode is the high-efficiency "heads down" view, designed to help researchers process a high volume of edits as quickly as possible. The core design principle was to minimize cognitive load by showing only the most essential information needed to make a decision.

Default Review Mode

1. Task Submission Button

The submission button is disabled until all subtasks are done.

2. Edit Counter and AI Edit Toggle

Provides an at-a-glance view of progress and allows users to toggle visibility of all AI-automated edits.

3. AI Confidence Score Subtask

Each card streamlines decision-making by surfacing the proposed edit, AI rationale, and clear action CTAs.

4. View In Context (Switch to Edit Mode)

Allows seamless switching from rapid review to a contextual edit mode for deeper investigation.

5. Subtask Navigation

Enforces single-task focus and minimizes cognitive load by allowing only one card to be open at a time.

AI Automations Review Mode

1. Building AI Trust

A simple icon makes AI actions obvious. The prominent "undo" button ensures the human is always in control.

2. Safe by Default: Disabled Actions

Primary actions are disabled on resolved AI edits, preventing accidental changes.

3. Secondary Buttons Available

Secondary actions remain active, allowing users to investigate the AI's work and logic.

4. Dynamic Edit Counter

The counter updates in real time to show remaining tasks and flagged items, keeping researchers focused.

5. AI-Resolved Toggle

The toggle allows users to switch between the active queue and AI edits, providing full transparency and control.

Completed Review Mode

1. Building AI Trust

A simple icon makes AI actions obvious. The prominent "undo" button ensures the human is always in control.

2. Safe by Default: Disabled Actions

Primary actions are disabled on resolved AI edits, preventing accidental changes.

3. Secondary Buttons Available

Secondary actions remain active, allowing users to investigate the AI's work and logic.

Edit Mode

Edit Mode is the contextual counterpart to Review Mode. It allows researchers to "zoom in" on a specific data point and see it within the full survey, providing the necessary context to resolve complex or low-confidence edits.

Edit Mode With Tasks

1. At-a-Glance Status

A check/dot system provides quick status for entire sections, allowing users to swiftly navigate to tasks.

2. Attention Funnel

A distinct purple badge acts as a spotlight, drawing the user's eye to the exact field that needs attention.

3. Transparent AI Automation

A badge indicates AI automations. Clicking it opens the resolved AI card, ensuring full transparency.

4. No More Scrolling

The drawer lets researchers jump between any required task instantly, eliminating slow, frustrating scrolling.

5. Maintaining Momentum

Showing the next task reduces cognitive load by signaling what's next without distraction.

Edit Mode Complete

1. Final Confirmation

Once all tasks are complete, the submission button becomes active, creating a clear final step for the user.

2. Small Edit Badges

A purple badge and icon for AI edits and a check mark for human edits differentiate between edit types.

3. Resolved Card

Resolved cards display the action taken along with an undo button. Completed actions are greyed out.

4. Dynamic Edit Counter

The counter updates in real time, keeping researchers focused on the next task within the contextual view.

5. AI-Resolved Toggle

The toggle lets users show or hide AI-resolved edits in the drawer.

Confidence Score and Automation

The AI Confidence Score card is the heart of the entire system. It's the primary subtask within the survey and the core of the Explainable AI (XAI) strategy, designed to transform researchers from data entry clerks into strategic reviewers.

Default Confidence Score

1. Instant Context

The header provides immediate context on the data type and the AI's confidence, allowing researchers to instantly triage and prioritize their effort.

2. Rapid Comprehension

A dead-simple "Before/After" comparison, aided by color-coding, allows for lightning-fast understanding of the proposed change.

3. User in Control

The three primary actions (Accept, Reject, Flag) are always present, reinforcing the core principle that the human makes the final call.

4. Guided Verification

The rationale doesn't just explain the AI's logic; it provides researchers with guided resources to verify the data themselves, faster.

5. Investigation On-Demand

Provides one-click access to deeper context, empowering researchers to solve complex edge cases with confidence.
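The card anatomy above maps naturally onto a single reusable payload shared by Review Mode and Edit Mode. This dataclass is my illustrative reconstruction of that shape, not the production schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ConfidenceScoreCard:
    """One reviewable subtask, rendered identically in Review Mode and Edit Mode."""
    data_type: str                   # header: which kind of datapoint is being edited
    confidence: float                # header: the AI's 0.0-1.0 confidence score
    before: str                      # 'Before/After' comparison for rapid comprehension
    after: str
    rationale: str                   # short AI rationale shown on the card
    sources: list[str] = field(default_factory=list)  # guided-verification links
    resolved_by: Optional[str] = None  # "ai", a researcher id, or None while pending

    def primary_actions_enabled(self) -> bool:
        # Accept/Reject/Flag are disabled once resolved; undo remains available.
        return self.resolved_by is None
```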

Resolved Confidence Score

1. Confident Reversibility

Clear UI confirmation plus a persistent "Undo" button means users can act quickly, knowing no decision is final until they submit.

2. Building Long-Term Trust

Keeping the AI rationale visible even after resolution is a commitment to transparency that continually builds the user's trust in the system over time.

3. Preventing Accidental Edits

Disabling primary actions after resolution makes the interface safer and reduces the risk of accidentally changing a completed task.

4. Constant Access to Ground Truth

To ensure full transparency, these actions are always visible, providing a permanent path to the underlying data and AI logic.

Explainable AI (XAI) Component

1. Demystifying the AI

This is the heart of our Explainable AI (XAI) strategy. It breaks down the AI's "thinking" to build trust and empower the researcher.

2. Eliminating Manual Searches

Providing direct links to the AI's sources empowers researchers to verify data with a single click, killing a major source of workflow inefficiency.

RTS Notes Component

1. Killing Context-Switching

Allowing users to read and add notes in-flow eliminates one of the most time-consuming and hated parts of the old workflow.

2. The Escape Hatch

While the goal is to keep users in-flow, this link to the legacy system provides a necessary escape hatch for deep-dive investigations, building user trust.

3. Intelligent Summaries

The component surfaces the most critical details from each note by default, giving researchers at-a-glance context without the noise.

4. Historical Context

Always showing the first and last notes for complex edits gives researchers an instant historical landscape to inform their decisions.

5. Power-User Efficiency

Hotkey support for adding notes allows power users to contribute their findings at the speed of thought, keeping the data rich and current.

Video Prototype: Researcher Task Example

Key Metrics and Reflections

To create a defensible projection for this 10-week project, I collaborated with my PM, Design Manager, and the Primary Data Sourcing Manager. We used official 2025 performance benchmarks (an average of 91 min/survey) to model the impact of the new AI-powered workflow. Our model, validated by product leadership, projects that my design vision will make the survey review process:

2.6X Faster

Data operations

1,300+ Hours

Saved monthly for PDS team

62% Less Time

For data upload and verification
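As a sanity check on how these headline numbers relate, here is the arithmetic, reconstructed from the 91 min/survey benchmark; the monthly survey volume is my inference, not a source figure.

```python
baseline_min = 91                          # 2025 benchmark: average minutes per survey
reduction = 0.62                           # projected reduction in upload/verification time

new_min = baseline_min * (1 - reduction)   # ≈ 34.6 minutes per survey
speedup = baseline_min / new_min           # ≈ 2.63x faster
saved_min = baseline_min * reduction       # ≈ 56.4 minutes saved per survey

# Inferred: ~1,380 surveys/month across the PDS team would account for 1,300+ hours saved.
monthly_surveys = 1300 * 60 / saved_min
print(round(new_min, 1), round(speedup, 2), round(monthly_surveys))
```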

The final design vision was presented to and approved by PitchBook's CPO, directors of product, and design leadership, securing its place on the official company roadmap.

Reflections

Why I Choose Harder Problems

This summer I pushed myself to take on the biggest challenge possible. I took the original "intern project requirements" and massively expanded on them, based on user research insights and the drive to tackle complex problems beyond my expected skill level. My manager gave me the option to take on the massive responsibility, and I jumped on the opportunity. In the end, I delivered strong, impactful designs that majorly expanded my skills. It's these challenges I deeply value, and I'm looking for more.

Speaking Engineer, Thinking Business

Beyond designing, I focused on collaborating with technical partners and communicating the business impact behind my decisions. At PitchBook, I learned to collaborate successfully with technical roles and align on the business goals we were looking to achieve. Building AI architecture wasn't a solo task; I needed to communicate with AI/ML teams, technical PMs, and other designers. This summer taught me how crucial design is for business, and how to communicate to leadership exactly why my designs will bring value to the company. I look forward to collaborating and grounding my design work in business impact.

AI Design Obsessed

This summer was a massive deep dive into AI design. Every day I used various models to critique designs, learn about new advancements in AI (very meta), and stress-test my design thinking. I learned to incorporate crucial design tools like Uizard and Figma Make into my workflow, accelerating my ideation and letting me skip from low fidelity straight to mid fidelity to communicate ideas. Most importantly, I was shipping enterprise AI product designs. I researched AI design principles, worked closely with engineering to design AI architecture, and became incredibly passionate about the power and ethics of these models. My neuroscience background suddenly clicked with AI design principles, and it was fascinating to design these systems. These experiences made me immensely passionate about AI/ML, and I want to push the boundaries of what it can do for products and design.