Hi, I'm Prasana. I study Computer Science at the University of Waterloo and Business at Wilfrid Laurier University. I love building things that make a measurable difference for the people who use them. Currently, I'm interning at Munich Re.
Outside of work, I enjoy exploring new AI tools, teaching scuba diving as a TA, and spending time at the gym. I also love keeping up with equities, investing, and the latest consumer tech.
Deep Dives
Insurance Limit Prediction Model
Context
At HSB (Munich Re), underwriters relied on manual heuristics to set insurance limits — a slow, inconsistent process that left money on the table and introduced risk.
Approach
Built a LightGBM ensemble with SHAP explainability, fed by a data-cleaning pipeline covering 50k+ policy records. Engineered features around loss history, exposure metrics, and industry codes, and iterated weekly with underwriting stakeholders to calibrate the outputs.
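A minimal sketch of that modeling step in Python. The file path, feature names, target column, and hyperparameters are illustrative stand-ins, not the production configuration:

```python
import lightgbm as lgb
import pandas as pd
import shap

# Hypothetical cleaned policy extract; the real feature set was broader
df = pd.read_csv("policies_clean.csv")
df["industry_code"] = df["industry_code"].astype("category").cat.codes  # integer-encode for simplicity

features = ["loss_history_5yr", "total_insured_value", "industry_code"]
X, y = df[features], df["approved_limit"]

# Gradient-boosted tree ensemble for the limit recommendation
model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X, y)

# SHAP attributes each prediction to its inputs, so a recommended limit
# can be traced back to the features that pushed it up or down
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)
```

The SHAP attributions are what made the recommendations explainable enough for underwriters to calibrate against during the weekly iterations.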
Outcome
Reduced prediction error from 19% to 7% MAPE; the model was adopted by the underwriting team as its default recommendation engine.
MAPE improvement: 19% → 7%
Records processed: 50,000+
Python · LightGBM · SHAP · SQL · Azure ML
Reddit Sentiment Analysis Platform
Context
Needed a way to gauge real-time public sentiment across subreddits for market and brand analysis — no existing internal tool supported this at scale.
Approach
Built a full-stack pipeline that ingests subreddit data via the Reddit API, runs inference through a fine-tuned DistilBERT model, and stores results in PostgreSQL. Created a React dashboard for trend visualization and filtering by subreddit, time range, and sentiment polarity.
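A minimal sketch of the ingest-score-store loop, assuming placeholder Reddit credentials, an example subreddit, a hypothetical post_sentiment table, and the stock SST-2 DistilBERT checkpoint standing in for the fine-tuned model:

```python
import praw
import psycopg2
from transformers import pipeline

# Reddit API client (placeholder credentials)
reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="sentiment-pipeline")

# Stock SST-2 DistilBERT stands in for the fine-tuned checkpoint here
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

conn = psycopg2.connect("dbname=sentiment")
with conn, conn.cursor() as cur:
    # Score the current hot posts for one subreddit; the deployed pipeline
    # fans out across many subreddits on a schedule
    for post in reddit.subreddit("stocks").hot(limit=100):
        result = classifier(post.title)[0]  # titles fit well under the 512-token limit
        cur.execute(
            "INSERT INTO post_sentiment (post_id, label, score) VALUES (%s, %s, %s)",
            (post.id, result["label"], result["score"]),
        )
```

The React dashboard then reads from the same table, filtering by subreddit, time range, and sentiment polarity.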
Outcome
Deployed a working end-to-end platform that processes 1,000+ posts per hour with real-time sentiment scoring and interactive dashboards.
Throughput: 1,000+ posts/hr
Model: Fine-tuned DistilBERT
Python · React · HuggingFace · PostgreSQL · FastAPI
Automated Reporting Pipeline
Context
Monthly reporting at HSB required analysts to manually pull data from multiple sources, format spreadsheets, and update Power BI dashboards — consuming 150+ person-hours monthly.
Approach
Designed an end-to-end Python pipeline integrating SQL Server, Azure Data Factory, and Power BI APIs. Built parameterized report templates and a scheduling layer that runs automatically on the first of each month.
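A minimal sketch of what one monthly run looks like in Python. The connection string, table name, dataset ID, and bearer token are placeholders, and the Excel export stands in for the parameterized report templates:

```python
import pandas as pd
import pyodbc
import requests

def run_monthly_report(period: str, token: str) -> None:
    """One monthly run; the scheduling layer calls this on the first of the month."""
    # 1. Pull the month's data from SQL Server with a parameterized query
    conn = pyodbc.connect("DRIVER={ODBC Driver 18 for SQL Server};SERVER=...;DATABASE=...")
    df = pd.read_sql("SELECT * FROM monthly_metrics WHERE period = ?", conn, params=[period])

    # 2. Render the formatted spreadsheet (stand-in for the report templates)
    df.to_excel(f"report_{period}.xlsx", index=False)

    # 3. Trigger a Power BI dataset refresh through the REST API
    resp = requests.post(
        "https://api.powerbi.com/v1.0/myorg/datasets/<dataset-id>/refreshes",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
```

Once the refresh completes, the downstream delivery step emails the generated reports to stakeholders.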
Outcome
Saved 150+ person-hours per month and eliminated manual formatting errors. Reports now auto-generate and land in stakeholder inboxes by 9 AM.
Most dev portfolios showcase technologies. But the best software I've built started from a frustration, not a framework. Here's how I think about building things that matter.
At 30 meters, you can't improvise. Every dive has a plan, a backup, and a checklist. It turns out, that's also how you build reliable data infrastructure.
When your background spans ML, full-stack, and quant — how do you prepare for interviews that want you to be a specialist? My approach after 30+ mock interviews.
I'm usually doing something that gets me out of my routine. In the summers that often means diving, and lately it also means working toward my private pilot licence. I've also spent a big part of my life around public speaking and theatre; years of performing in regional plays have shaped how I communicate and carry myself. Beyond that, I enjoy hiking, travelling, and exploring new places whenever I get the chance.