Hey, I am Prasana Y Doshi

Equal parts curious, caffeinated, and crafty.

currently: diving into data pipelines at Munich Re
Entry 001

About

Hi, I'm Prasana. I study Computer Science at the University of Waterloo and Business at Wilfrid Laurier University. I love building things that drive impact and help stakeholders. Currently, I am interning at Munich Re.

Outside of work, I enjoy exploring new AI tools, teaching scuba diving as a TA, and spending time at the gym. I also love keeping up with equities, investing, and the latest consumer tech.

Entry 002

Deep Dives

Insurance Limit Prediction Model

Context

At HSB (Munich Re), underwriters relied on manual heuristics to set insurance limits — a slow, inconsistent process that left money on the table and introduced risk.

Approach

Built a LightGBM ensemble model with SHAP explainability, fed by a cleaned pipeline of 50k+ policy records. Designed feature engineering around loss history, exposure metrics, and industry codes. Iterated weekly with underwriting stakeholders to calibrate outputs.

Outcome

Reduced prediction error from 19% MAPE to 7% MAPE — adopted by the underwriting team as the default recommendation engine.

MAPE improvement: 19% → 7%
Records processed: 50,000+
Python · LightGBM · SHAP · SQL · Azure ML

Reddit Sentiment Analysis Platform

Context

Needed a way to gauge real-time public sentiment across subreddits for market and brand analysis — no existing internal tool supported this at scale.

Approach

Built a full-stack pipeline that ingests subreddit data via the Reddit API, runs inference through a fine-tuned DistilBERT model, and stores results in PostgreSQL. Created a React dashboard for trend visualization and filtering by subreddit, time range, and sentiment polarity.
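The inference stage can be sketched with the Hugging Face `pipeline` API. Note this uses the public `distilbert-base-uncased-finetuned-sst-2-english` checkpoint as a stand-in for the project's own fine-tuned model, and the sample posts are invented.

```python
# Sketch: DistilBERT sentiment scoring for a batch of posts.
# The checkpoint below is a public stand-in, not the project's
# fine-tuned model; in the real pipeline, results would then be
# written to PostgreSQL for the React dashboard to query.
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

posts = [
    "This launch exceeded every expectation.",
    "Support has been unresponsive for a week.",
]
for post, result in zip(posts, sentiment(posts)):
    print(result["label"], round(result["score"], 3), "-", post)
```

Batching posts through the pipeline in this way (rather than one call per post) is what makes thousands-of-posts-per-hour throughput feasible on modest hardware.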

Outcome

Deployed a working end-to-end platform capable of processing thousands of posts per hour with real-time sentiment scoring and interactive dashboards.

Throughput: 1,000+ posts/hr
Model: Fine-tuned DistilBERT
Python · React · HuggingFace · PostgreSQL · FastAPI

Automated Reporting Pipeline

Context

Monthly reporting at HSB required analysts to manually pull data from multiple sources, format spreadsheets, and update Power BI dashboards — consuming 150+ person-hours monthly.

Approach

Designed an end-to-end Python pipeline integrating SQL Server, Azure Data Factory, and Power BI APIs. Built parameterized report templates and a scheduling layer that runs automatically on the first of each month.
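The parameterized-template-plus-scheduling idea can be sketched in a few lines. Everything here is a hypothetical simplification: the real pipeline pulls from SQL Server via Azure Data Factory, and the source name and fields below are invented.

```python
# Sketch: a parameterized report template plus a first-of-month gate.
# The template fields and "sql_server.claims" source are hypothetical.
from datetime import date
from string import Template

REPORT_TEMPLATE = Template(
    "Monthly Report - $period\n"
    "Source: $source\n"
    "Rows processed: $row_count\n"
)

def build_report(period: str, source: str, row_count: int) -> str:
    """Render one report from the shared parameterized template."""
    return REPORT_TEMPLATE.substitute(
        period=period, source=source, row_count=row_count
    )

def should_run(today: date) -> bool:
    """The scheduling layer fires only on the first of each month."""
    return today.day == 1

if should_run(date(2024, 3, 1)):
    print(build_report("2024-03", "sql_server.claims", 50_000))
```

Keeping the template and the schedule gate separate means new reports only require a new template, not new scheduling logic.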

Outcome

Saved 150+ person-hours per month and eliminated manual formatting errors. Reports now auto-generate and land in stakeholder inboxes by 9 AM.

Hours saved monthly: 150+
Error rate: 0 manual errors
Python · SQL Server · Azure · Power BI · Pandas
Entry 004

Transmissions

Career

A Generalist's Guide to Technical Interviews

When your background spans ML, full-stack, and quant — how do you prepare for interviews that want you to be a specialist? My approach after 30+ mock interviews.

View all posts
Entry 005

Away from the Screen

I am usually doing something that gets me out of my routine. In the summers, that often means diving, and lately it also means working toward my private pilot licence. I have also spent a big part of my life around public speaking and theatre, with years of performing in regional plays shaping how I communicate and carry myself. Beyond that, I enjoy hiking, travelling, and exploring new places whenever I get the chance.

Entry 006

Coordinates