AI-Powered PitchBook Private Data Pipeline Redesign
PitchBook: Designing an AI-powered Data Upload and Verification Tool
At a Glance
I led the design of PitchBook's first AI-assisted data operations system, a CPO-backed initiative that laid the foundation for the company's new data platform and earned a place on the official product roadmap.
My Role: Product Design Intern, Expanded into AI Design Strategy & Product Design
Team: 1 Product Designer, 2 PMs, 1 Design Manager
Duration: 10-Week Sprint
Key Contributions: Driving a C-Suite Initiative, Leading AI Design Strategy, Owning End-to-End UX/UI, Systems & Workflow Optimization
Tools: Figma, Uizard AI, Figma Make, Lovable, Claude & Gemini, LucidChart, Jira & Confluence
Outcome: Designed a cornerstone AI tool for PitchBook's new company-wide data platform, projected to make private data operations:
2.6X Faster
Data operations
62% Less Time
For data upload and verification
Context
Problem: After merging with Morningstar's public operations, PitchBook's data operations needed to adapt to new scale and complexity.
My Mission: Design a tool that drastically accelerates the data pipeline, improves accuracy, and reduces researchers' cognitive burden, shifting their work from tedious data entry to high-impact strategic review.
Solution: I designed an AI data review system that automates simple data edits and guides complex ones with AI confidence scores, transforming the researcher's role from data entry to strategic oversight.




Research and Discovery
The Primary Data Sourcing (PDS) team plays a crucial role in ensuring that private data remains consistently updated and reliable, using a sophisticated research tool suite (RTS). The process of updating company profiles within the existing RTS system is complex and inefficient, primarily due to the manual steps required to transfer data between the Survey Portal Review Tool and RTS.
Survey Submission
Company completes and submits their data survey online or through email.
Researcher Analysis
PDS Researchers analyze edits and update the private data system (RTS).
Data on Platform
Private market data is updated and available to PitchBook clients.
Key User Segment & Pain Points (From PRD)
THE USER
Primary Data Sourcing Researchers at PitchBook, the backbone of the company's data operations.
THEIR GOALS
To enter and verify complex survey data as quickly and accurately as possible.
THE STAKES
They operate under aggressive performance goals, such as processing 26 surveys per day.
PAIN POINTS
Workflow Inefficiency & Context Switching
Constant context-switching between tools, wasting valuable time.
Lack of Clarity & Progress Tracking
Difficulty tracking progress, causing researchers to re-check their work.
Low Confidence & Fear of Errors
A persistent worry of making unrecoverable, data-damaging errors.
The “Aha” moment
The unstructured verification search across multiple sources created cognitive strain for researchers trying to maintain accuracy and speed. The data verification process was inefficient for two key reasons:
Cognitive Strain: Researchers experienced mental friction from having to search through unstructured internal notes and various external sources.
Time Consumption: This multi-source search was extremely time-consuming, delaying how quickly updated data reached end-users.
Ideation Phase
Ideation Goals:
Increase the speed of data edit uploads.
Boost accuracy during the editing process.
Improve efficiency by keeping researchers within a single tool.
Boost researcher confidence in the edits they make.
The AI Visionary Solution: AI Automation and Verification
After meeting with product leadership about AI, I realized our verification process was perfect for automation.
How the AI Confidence Scores Work:
Automated Edits: If the confidence rating for an edit was high, the system would automatically approve it, requiring no human intervention.
Human Review: For edits with medium or low confidence, the system would flag them for manual verification by a researcher.
Guided Verification: The confidence score would provide its underlying logic and direct the user to helpful internal and external sources to speed up the review process.
Explainable AI (XAI): Researchers could see the exact sources the AI used to make its decision, ensuring transparency and building trust in the system.
Projected to reduce edit time by an estimated 50% through intelligent automation (calculated from existing performance metrics).
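To make the triage logic concrete, here is a minimal sketch of how confidence-based routing could work. The threshold value, field names, and data structure are illustrative assumptions for this case study, not PitchBook's actual implementation.

```python
from dataclasses import dataclass

# Illustrative threshold; the real cut-off would be tuned by the AI/ML team.
AUTO_APPROVE_THRESHOLD = 0.90

@dataclass
class ProposedEdit:
    field: str          # e.g. "employee_count"
    old_value: str
    new_value: str
    confidence: float   # 0.0-1.0 score produced by the model
    sources: list       # internal/external sources the model consulted (XAI)
    rationale: str      # human-readable explanation of the model's logic

def route_edit(edit: ProposedEdit) -> str:
    """Route an edit per the rules above: high confidence is auto-approved,
    medium or low confidence is flagged for researcher review."""
    if edit.confidence >= AUTO_APPROVE_THRESHOLD:
        return "auto_approved"   # applied with no human intervention
    return "human_review"        # researcher subtask, with rationale and sources attached

edit = ProposedEdit("employee_count", "120", "135", 0.72,
                    ["RTS profile", "company website"],
                    "Survey value matches the company's careers page.")
print(route_edit(edit))  # -> human_review
```

In the flagged case, the same confidence card carries the rationale and source links, which is where the guided verification and XAI behavior described above comes in.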
Strategic Pivot & Design Evolution
As these simple, high-impact solutions were being finalized, a new company-wide initiative (UDCP) created a major engineering constraint. This new constraint changed the entirety of my internship — I shifted from a project of incremental improvements to a complete, AI-powered redesign for massive impact.
Even for the most basic features, the Ukraine-based engineering team estimated two full quarters of development. These “quick wins” weren’t quick at all in reality. With this new data, it was clear that pursuing the high-impact AI features was the only logical path forward, as the Phase 1 edits were no longer feasible.
My manager told me that AI features needed to be universal across the entire platform, and that my survey redesign had to contain these new designs. She gave me an out, asking if I wanted to focus on a simpler, intern-level UI project and pass these senior-level responsibilities to a more experienced designer. I immediately said no and began working.
Expanded Mission
Redesign the survey tool for UDCP, integrating the AI tools I had designed to support it.
Lead the design of the AI Automation and Confidence Score tools across all of UDCP and get them approved by leadership.
Define the technical implementation for scaling these AI tools with the lead PM for AI/ML.
For the new designs, the main focus was the AI solutions. The mid-fidelity AI designs I took over lived within a drawer, and I had to redesign the survey for UDCP with this in mind. As I redesigned these tools, AI transparency was crucial: researchers needed to know immediately what had been automated by AI and exactly what was expected of them.


Confidence Score Example


Confidence score drawer in ECS (a Public tool in UDCP)
A core principle of my design was ensuring researchers always felt in control, giving them the ability to undo any AI-automated edit and see the logic behind it (XAI), avoiding "black box" problems.
Critical Design Review & The Bigger Picture
During a review with 20+ designers, I asked targeted questions, and the team identified critical gaps in the workflow and AI communication features:
Disconnected & Fragmented Workflow
This initial design failed because it lacked a clear, unified task zone. As the arrows illustrate, users were forced to jump between the top navigation (to see the summary and submit), the main content area (to see data in context), and the bottom drawer (to review the AI suggestions and navigate). This chaotic and fragmented path created high cognitive load and directly contradicted the primary user goal of a streamlined, efficient workflow.


Positive Feedback on XAI:
However, the team praised the detailed logic in the confidence scores, affirming that this approach to explainable AI was critical for building researcher trust — especially to avoid “black box theory”.


Want to see the full iteration process?
Redesign: Back to Mid-Fidelity
I went back to mid-fidelity to speed up the ideation process and ensure the concept had bulletproof, human-centered logic before moving to high fidelity.
I started by mapping out user jobs to be done and tying them to solutions.






New Flow and Ideation
There would be two main ways of completing these edits:
Review Mode: A hyper-streamlined view that shows only the AI confidence scores, without the contextual noise.
Edit Mode: A contextual view where researchers see each edit within the full survey, with an AI drawer containing the exact same subtasks as Review Mode.
Mid-fidelity & Strategic Alignment



Review Mode (Starting place)



Edit Mode (Contextual View)




Confidence Score with AI Logic Layer
To refine it even more and showcase the project's design growth, I presented at my third design review with 20+ designers. Bottom line: the direct stakeholders and the design team were behind my designs and were impressed. With these new refinements, it was truly a bulletproof concept, ready for high-fidelity design.
AI/ML Engineering Showcase
With the AI/ML team, I suggested an AI model framework that’s consistent across UDCP. The system would query internal RTS databases and external APIs to cross-reference data points against established validation rules.
These two factors are weighted to produce the final confidence score:
Source Accuracy: The AI scrapes internal (RTS) and external (news, websites) sources and assigns an accuracy rating to the data.
Methodology: The AI cross-references the edit with the strict, established methodologies that researchers are trained to follow. This would serve as the quality check for ECS (the public tool).
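As a rough illustration of the weighting idea, a minimal sketch follows. The 60/40 weights, input scales, and function name are placeholders chosen for this example; the real weighting would be tuned by the AI/ML team against Golden Sets.

```python
def confidence_score(source_accuracy: float, methodology_fit: float,
                     w_source: float = 0.6, w_method: float = 0.4) -> float:
    """Blend the two factors into a single 0-1 confidence score.

    source_accuracy: how strongly internal (RTS) and external sources corroborate the edit
    methodology_fit: how well the edit conforms to the documented research methodology
    """
    assert abs((w_source + w_method) - 1.0) < 1e-9, "weights should sum to 1"
    return w_source * source_accuracy + w_method * methodology_fit

# Example: strong source corroboration, decent methodology fit
print(round(confidence_score(source_accuracy=0.95, methodology_fit=0.80), 2))  # -> 0.89
```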
Working with the AI/ML PM, we defined the technical architecture: the system processes survey documents through an orchestration layer called a 'Datapoint Dictionary' that applies business logic to identify the relevant datapoints for confidence scoring. Success would be measured through Human-in-the-Loop (HIL) feedback and accuracy testing against Golden Sets (standard ML validation datasets).
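The sketch below shows one plausible reading of that architecture: a Datapoint Dictionary applying business logic to decide which datapoints in a parsed survey get confidence-scored, plus a simple Golden Set accuracy check. The dictionary contents, field names, and helper functions are hypothetical.

```python
# Hypothetical Datapoint Dictionary: business rules deciding which parsed
# datapoints are routed into confidence scoring (entries are illustrative).
DATAPOINT_DICTIONARY = {
    "employee_count": {"score": True,  "type": "integer"},
    "headquarters":   {"score": True,  "type": "text"},
    "ceo_commentary": {"score": False, "type": "free_text"},  # out of scope for scoring
}

def select_scorable_datapoints(parsed_survey: dict) -> dict:
    """Apply the dictionary's business logic to a parsed survey document."""
    return {name: value for name, value in parsed_survey.items()
            if DATAPOINT_DICTIONARY.get(name, {}).get("score", False)}

def golden_set_accuracy(system_output: dict, golden_set: dict) -> float:
    """Share of Golden Set datapoints where the system's output matches the validated answer."""
    matches = sum(system_output.get(name) == expected for name, expected in golden_set.items())
    return matches / len(golden_set)

survey = {"employee_count": "135", "headquarters": "Seattle, WA", "ceo_commentary": "..."}
print(select_scorable_datapoints(survey))                      # only the scorable fields
print(golden_set_accuracy({"employee_count": "135"},
                          {"employee_count": "135"}))          # -> 1.0
```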
High-Fidelity Execution
The core of the solution is a Dual-Mode Interface designed for two distinct user mindsets: a hyper-focused Review Mode for speed, and a context-heavy Edit Mode for deep analysis. The system intelligently protects sensitive data (like funding rounds) from full automation, always keeping the researcher in control of what matters most. This vision was approved by PitchBook's CPO, PM Directors, and Design Leadership following my final presentation and was officially added to the product roadmap.
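One way to express that guardrail is a small policy layer that forces human review for sensitive fields regardless of the model's confidence. The field list and threshold below are illustrative assumptions, not the shipped rule set.

```python
# Hypothetical set of fields that are never auto-approved, no matter how confident the AI is.
SENSITIVE_FIELDS = {"funding_round", "valuation", "investor_list"}

def final_route(field: str, confidence: float, auto_threshold: float = 0.90) -> str:
    """Sensitive datapoints always go to a researcher, keeping the human in control."""
    if field in SENSITIVE_FIELDS:
        return "human_review"
    return "auto_approved" if confidence >= auto_threshold else "human_review"

print(final_route("funding_round", 0.99))  # -> human_review (protected field)
print(final_route("headquarters", 0.95))   # -> auto_approved
```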
Review Mode
Review Mode is the high-efficiency "heads down" view, designed to help researchers process a high volume of edits as quickly as possible. The core design principle was to minimize cognitive load by showing only the most essential information needed to make a decision.
Default Review Mode
1. Task Submission Button
The submission button stays disabled until all subtasks are complete.
2. Edit Counter and AI Edit Toggle
Provides an at-a-glance view of progress and allows users to toggle visibility of all AI-automated edits.
3. AI Confidence Score Subtask
Each card streamlines decision-making by surfacing the proposed edit, AI rationale, and clear action CTAs.
4. View In Context (Switch to Edit Mode)
Allows seamless switching from rapid review to a context edit mode for deeper investigation.
5. Subtask Navigation
Enforces a single-task focus and minimizes cognitive load by allowing only one card to be open at a time.




Edit Mode
Edit Mode is the contextual counterpart to Review Mode. It allows researchers to "zoom in" on a specific data point and see it within the full survey, providing the necessary context to resolve complex or low-confidence edits.
Edit Mode With Tasks
1. At-a-Glance Status
Check/dot system provides quick status of entire sections, allowing users to swiftly navigate to tasks.
2. Attention Funnel
A distinct purple badge acts as a spotlight, drawing the user's eye to the exact field that needs attention.
3. Transparent AI Automation
A badge indicates AI automations; clicking it opens the resolved AI card, ensuring full transparency.
4. No More Scrolling
Drawer lets researchers jump between any required task instantly, eliminating slow, frustrating scrolling.
5. Maintaining Momentum
Showing the next task reduces cognitive load by signaling what's next without distraction.




Confidence Score and Automation
The AI Confidence Score card is the heart of the entire system. It’s the primary subtask within the survey and the core of the Explainable AI (XAI) strategy, designed to transform researchers from data entry clerks into strategic reviewers.
Default Confidence Score


1. Instant Context
The header provides immediate context on the data type and the AI's confidence, allowing researchers to instantly triage and prioritize their effort.
2. Rapid Comprehension
A dead-simple 'Before/After' comparison, aided by color-coding, allows for lightning-fast understanding of the proposed change.
3. User in Control
The three primary actions (Accept, Reject, Flag) are always present, reinforcing the core principle that the human makes the final call.
4. Guided Verification
The rationale doesn't just explain the AI's logic; it provides researchers with guided resources to verify the data themselves, faster.
5. Investigation On-Demand
Provides one-click access to deeper context, empowering researchers to solve complex edge cases with confidence.
Video Prototype: Researcher Task Example
Key Metrics and Impact
To create a defensible projection for this 10-week project, I collaborated with my PM, Design Manager, and the Primary Data Sourcing Manager. We used official 2025 performance benchmarks (average 91 min/survey) to model the impact of the new AI-powered workflow. Our model, validated by product leadership, projects that my design vision will make the survey review process:
2.6X Faster
Data operations
1,300+ Hours
Saved monthly for PDS team
62% Less Time
For data upload and verification
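As a quick back-of-the-envelope check on how these figures fit together (using only the 91-minute benchmark above; the team throughput behind the monthly total is an inference, not an official figure):
91 min × (1 − 0.62) ≈ 34.6 min per survey under the new workflow
91 ÷ 34.6 ≈ 2.6× faster
≈ 56 min saved per survey, so 1,300+ hours saved per month would correspond to roughly 1,400 surveys processed by the PDS team each month.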
The final design vision was presented to and approved by PitchBook's CPO, directors of product, and design leadership, securing its place on the official company roadmap.