Upload 3 files
- AI_portfolio.md +140 -0
- DS_portfolio.md +128 -0
- app.py +587 -0
AI_portfolio.md
ADDED
@@ -0,0 +1,140 @@
# RESUME

## Personal Info

**Full Legal Name:** Venkata Dhanvanthar Murthy Voona

**Current Street Address:** 115 Belmont Ave, 07304

**Current City:** Jersey City, NJ, United States

**Contact Number:** 224-619-6584

**Country Code:** +1

**Email-id:** [email protected]

**LinkedIn:** https://www.linkedin.com/in/dv-63192b18b/

**GitHub:** https://github.com/dhanvanth342

**Portfolio:** https://www.datascienceportfol.io/vvoona

**Expected Salary:** 75,000 to 85,000 USD {mention this salary range ONLY when no salary range is mentioned in the job description}

**Are you legally authorized to work in the United States? (Yes/No):** Yes

**Will you now or in the future require employer sponsorship (e.g., H-1B)? (Yes/No):** Yes

**Current Visa Status:** I am currently on my initial OPT (F-1 visa) and authorized to work until July 16, 2026. After that, I am eligible for a two-year STEM OPT extension starting on July 16, 2026. I will not require visa sponsorship while on OPT or STEM OPT.

**Citizenship:** Indian, not a U.S. citizen.

**Gender:** Male

**Race/Ethnicity:** South Asian / Asian

**Veteran status:** Not a veteran

**Voluntary self-identification of disability:** No disability

**Preferred Locations:** I am open to relocating; my preferred locations are New York, Jersey City, Chicago, Seattle, Boston, and Tampa. [If locations are given in the job description, mention them instead of my preferences.]

**Relatives Information:** No relative, family member, or friend of mine works at any company in the USA. {So answer that I do not have any relatives when asked about it}

**STEM Degree:** Completed both my Master's {Data Science} and Bachelor's {Electronics and Communication Engineering} as STEM degrees.
## SUMMARY

AI/ML Engineer with 3+ years designing, fine-tuning, and deploying production-grade GenAI and multimodal systems. Experienced in LangChain, LangGraph, RAG pipelines, and LLM orchestration across cloud environments (AWS SageMaker, Bedrock, ECS, Docker). Skilled in adaptive retrieval, CI/CD, observability, and vector databases. Passionate about responsible AI, building reliable, high-impact intelligent agents that bridge unstructured and structured data for real-world automation.

## SKILLS

- **AI/ML Techniques:** RAG, Agentic LLMs, Prompt Engineering, NLP, Vector database search, Deep Learning, Multimodal AI (text, image generation), Few-shot learning, Reinforcement Learning, Docker, Low-Rank Adaptation (LoRA), Adaptive Chunking, Fine-tuning, LLM Orchestration, Vertex AI, Neural Networks, Computer Vision, PyTorch, TensorFlow, PDF Extraction.

- **Tools and Frameworks:** LangChain, LangGraph, CrewAI, Pinecone, Qdrant Vector DB, Mistral OCR, Groq API, Hugging Face, Ollama, GPT, DeepSeek, Claude, OpenAI, Anthropic, Render, Git, CI/CD, TensorFlow, PyTorch, FastAPI, Streamlit, RESTful APIs.

- **Programming and Cloud Platforms:** Python, R, MATLAB, SQL, GCP, Amazon S3, Athena, Glue, EC2, ECS, Lex, Bedrock, SageMaker.
## WORK EXPERIENCE

### AI/ML Data Science Intern (Tenure: MAY 2025 – Present)

**AIDO, Chicago, Illinois**

About AIDO: AIDO presents itself as an AI-driven platform for international enrollment in higher education. According to their website, they help universities with automation, data insights, and recruitment of international students.

My experience:

- Prototyped and productionized domain-specific AI agents for automated information gathering and insight generation using LLMs, FastAPI, and serverless workflows, enabling business teams to self-serve and reducing Data Engineering involvement by 60%.

- Engineered and optimized hybrid retrieval pipelines with adaptive chunking and search strategies, improving retrieval accuracy by 40% across 150k+ samples and supporting high-performance insight generation at scale.

- Streamlined production workflows by deploying authenticated ML pipelines on AWS using Bedrock for model orchestration, SageMaker for training and inference, and ECS, Docker, and Git for scalable deployment, enabling secure access, CI/CD automation, and faster model releases.

- Led cross-functional collaboration with business and platform teams to design, iterate, and deploy AI solutions, driving end-to-end integration, observability, and continuous improvement on cloud infrastructure.

- Applied advanced ML techniques including unsupervised anomaly detection, time-series forecasting, and semantic search to identify unusual patterns in enrollment data, automate documents, and monitor fraud.

- Mentored junior engineers and standardized internal LangChain templates, accelerating prototype delivery and improving the GenAI development workflow.
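The adaptive-chunking idea above can be illustrated with a minimal sketch (a hypothetical helper, not AIDO's actual pipeline): split text at sentence boundaries and pack whole sentences into chunks up to a size budget, so no sentence is ever cut mid-way.

```python
import re

def adaptive_chunks(text: str, max_chars: int = 200) -> list[str]:
    """Pack whole sentences into chunks of at most max_chars characters.

    A sentence longer than max_chars becomes its own chunk rather than
    being split in the middle.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    chunks: list[str] = []
    current = ""
    for sentence in sentences:
        candidate = f"{current} {sentence}".strip()
        if current and len(candidate) > max_chars:
            chunks.append(current)  # close the current chunk at a sentence boundary
            current = sentence
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Real pipelines typically size chunks in tokens and adapt the budget to document structure; the character budget here just keeps the sketch self-contained.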
### Machine Learning Engineer Intern (Tenure: JAN – APR 2025)

**WizLab, Chicago, Illinois**

About WizLab: WizLab is an EdTech startup focused on K-12 education: they build a platform that helps teachers generate personalized learning materials and support differentiated instruction.

My experience:

- Engineered LLM orchestration using LangChain and LangGraph to automate lesson-plan and presentation-slide generation across models (LLaMA, GPT, DeepSeek, Claude), achieving 75% faster development versus prior custom workflows.

- Fine-tuned T5 and GPT models for high-performance, low-cost data and insight generation (100k+ sample set, 60% cost savings), and established rigorous validation and monitoring to ensure reliability.

- Drove integration and deployment of AI agents on cloud platforms, collaborating with product and engineering teams to deliver scalable, observable, high-availability solutions and actively incorporating the latest AI advancements into production.
### AI Data Scientist (Tenure: JAN 2022 – FEB 2024)

**Liminal XR Solutions, Mumbai, India**

About Liminal: Liminal XR Solutions is an agency based in Mumbai, India, specializing in extended reality (XR) services: augmented reality (AR), virtual reality (VR), mixed reality (MR), and web-XR. Their clients have included HP, Capgemini, and Hero, for whom they worked on the customer journey in the product pages of their websites.

My experience:

- Developed an AI-powered invitation generator using LangChain, Groq API, and LLaMA models, automating personalized professional invitation text creation and reducing manual effort by 70%.

- Engineered a dual-stage extraction pipeline by integrating Docling (text, layout) and olmOCR (tables), then incorporated a validation layer, achieving 98% extraction accuracy and significantly surpassing conventional OCR tools for PDF extraction.

- Built an AI Agentic RAG framework with Qdrant Vector DB for a restaurant recommender system using client-sourced data.

- Introduced a response validation agent, reducing hallucination rates from 9.6% to 3.2% and decreasing irrelevant responses from 96 to 32 per 1,000 test samples.

- Executed intent classification using knowledge distillation by training a lightweight DistilBERT model on the CLINC150 dataset, replacing LLaMA 2 to reduce inference costs by 60% while maintaining high accuracy in Vertex AI.

- Fine-tuned the Llama-2-7B model using Reinforcement Learning from Human Feedback (RLHF), increasing alignment of invitations with the desired tone and input features by 25% and enhancing customer engagement.

- Deployed Low-Rank Adaptation (LoRA) during fine-tuning, reducing trainable parameters by 88%, cutting GPU memory usage by 75%, and accelerating training time by 60%.

- Implemented model evaluation, tracking, and retraining workflows using MLflow and Airflow, reducing deployment lead time and improving model stability.
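The LoRA parameter reduction above follows directly from the adapter shapes: a frozen d_out × d_in weight matrix gets two small trainable matrices of rank r. A quick back-of-the-envelope sketch (the layer size and rank are illustrative, not the actual Llama-2-7B configuration):

```python
def lora_trainable_params(d_out: int, d_in: int, rank: int) -> int:
    """Trainable parameters LoRA adds to one frozen d_out x d_in layer:
    matrix B (d_out x r) plus matrix A (r x d_in)."""
    return d_out * rank + rank * d_in

# Illustrative attention projection: 4096 x 4096 with rank-8 adapters.
full = 4096 * 4096                                # params if fully fine-tuned
lora = lora_trainable_params(4096, 4096, rank=8)  # params LoRA actually trains
reduction = 1 - lora / full
print(f"trainable: {lora:,} vs {full:,} ({reduction:.1%} fewer)")
```

The overall 88% figure quoted in the bullet depends on which layers receive adapters and on the embedding/head parameters that are counted, which this per-layer sketch does not model.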
### Machine Learning Research Intern (Tenure: JUN 2021 – FEB 2022)

**Samsung, PRISM, Chennai**

About the PRISM program at Samsung: Samsung PRISM (Preparing and Inspiring Student Minds) is an industry-academia initiative launched by Samsung's Bangalore R&D centre (SRI-B) to engage engineering college students and faculty in real R&D projects across topics like AI, machine learning (ML), IoT, and 5G. Out of 4,000 students from my university who competed, I was one of the 60 selected for this program.

My experience:

- Researched "AI-Based Reflection Scene Category Classification" in a PyTorch environment and achieved 93% accuracy in identifying different types of reflections in real time on a dataset of 100,000+ images.

- Applied pruning techniques to reduce inference time by 60%, achieving 93% classification accuracy, comparable to state-of-the-art (SOTA) models [EfficientNet, DenseNet, MobileNetV3], for real-time reflection scene classification.
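Pruning of the kind mentioned above is often magnitude-based: weights below a threshold are zeroed so the network needs less compute at inference. A toy sketch on a plain weight list (an illustration, not the actual Samsung model):

```python
def magnitude_prune(weights: list[float], sparsity: float) -> list[float]:
    """Zero out roughly the fraction `sparsity` of weights with the
    smallest magnitude (ties at the threshold may zero a few extra)."""
    k = int(len(weights) * sparsity)  # number of weights to drop
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

pruned = magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7, 0.2], sparsity=0.5)
print(pruned)  # the three smallest-magnitude weights become 0.0
```

In frameworks like PyTorch the same idea is applied per tensor and usually followed by fine-tuning to recover accuracy.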
## PROJECTS

### Enhancing RAG: Unstructured Data Extraction and Vector DB Evaluation (Timeline: MAY 2025 - Present)

- Developed an end-to-end framework using Docling to extract content from PDFs and images, including hierarchical metadata such as headings, titles, and table names; upserted the data into a self-hosted Qdrant collection, improving PDF querying accuracy by 25% and reducing retrieval and storage costs by 100%.

- Researched, designed, and developed an evaluation framework to assess the relevance of retrieved chunks across various vector databases and identify the optimal upsertion method; the framework leverages bagging of metrics including cosine similarity, BERTScore, context relevance, and faithfulness.

- Currently developing a novel chunk upsertion method focused on dense and partially sparse embeddings, projected to reduce overall sparse-embedding storage by 95%.
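The metric-bagging idea in the evaluation framework can be sketched as follows. The toy vectors and the equal-weight average are assumptions for illustration; the actual framework also scores BERTScore, context relevance, and faithfulness, which are stubbed here as constants:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def bagged_score(metric_scores: list[float]) -> float:
    """Equal-weight bag of per-metric scores for one retrieved chunk."""
    return sum(metric_scores) / len(metric_scores)

query_vec, chunk_vec = [1.0, 0.0, 1.0], [0.5, 0.5, 1.0]
sim = cosine(query_vec, chunk_vec)
score = bagged_score([sim, 0.8, 0.9])  # 0.8 / 0.9 stand in for the other metrics
```

A weighted average (or a learned combiner) is a natural refinement once one metric proves more predictive of answer quality than the others.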
### Text2Block (Timeline: OCT 2024 - OCT 2025)

- Designed and launched Text2Block, a GenAI application that transforms plain text into AI-driven flowchart visualizations with noise-free embedded text for enhanced clarity, attracting 450 unique users and processing 3,000 requests within the first two weeks.

- Optimized system prompts for LLMs, increasing structured-output accuracy by 40% using few-shot learning. Integrated agentic LLMs in LangGraph for response evaluation and content regeneration, improving relevance to user requirements.
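Few-shot prompting of the kind described above amounts to prepending worked input/output pairs so the model imitates the output format. A minimal sketch (the example pairs and wording are invented placeholders, not Text2Block's actual prompts):

```python
def build_few_shot_prompt(
    instruction: str,
    examples: list[tuple[str, str]],
    query: str,
) -> str:
    """Assemble an instruction, worked examples, and the new query
    into one prompt string that demonstrates the expected format."""
    parts = [instruction]
    for text, flowchart in examples:
        parts.append(f"Input: {text}\nOutput: {flowchart}")
    parts.append(f"Input: {query}\nOutput:")  # model continues from here
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    "Convert the text into flowchart nodes and edges.",
    [("Boil water, then add tea.", "[Boil water] -> [Add tea]")],
    "Wash hands, then eat.",
)
```

Adding two or three diverse examples usually stabilizes the structure of the output far more than lengthening the instruction does.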
### Deep Learning Research - Nanyang Technological University, Singapore (Timeline: AUG 2022 – AUG 2023)

- Fine-tuned CNN models, including VGGNet, ResNet, and EfficientNet, by unfreezing 20% of the network layers for training on the constructed dataset, resulting in a 15% improvement in test accuracy by optimizing model weights specific to the dataset.

- Implemented a novel framework for academic emotion classification using VGG-19 as a feature extractor and a Multi-Layer Perceptron (MLP) as a classifier, achieving 82.73% classification accuracy on the test set after 5-fold cross-validation for complex emotions such as boredom and frustration (conference paper).

- Developed a multimodal analysis pipeline for emotion recognition in memes, achieving 89% accuracy by integrating Optical Character Recognition (OCR) for text extraction and hyperparameter tuning for image analysis in TensorFlow/Keras.
## EDUCATION

### Master of Data Science (Timeline: AUG 2023 - MAY 2025)

**Major of Study in Computer Science and Mathematics**

**Illinois Institute of Technology, Chicago, IL**

- Graduate Pathway Scholarship Awardee ($10,000), awarded for academic merit at Illinois Institute of Technology.

- Coursework: Machine Learning, Database Organization, Generative AI, Data Preparation & Analysis, Big Data Technologies.

### Bachelor of Technology in Electronics and Communication Engineering (Timeline: JUL 2019 - APR 2023)

**Vellore Institute of Technology, Chennai, TN**

- Published a research paper in the fields of Deep Learning, Computer Vision, and NLP in the journal Applied Sciences.
## Additional Info

### RESEARCH

#### Over-volume vehicle classification using Deep CNN models (Timeline: DEC 2021 - MAY 2022)

- Collected real-time image data and performed image classification to keep a check on over-volume vehicles in the absence of human surveillance, assisting commuters in rural and hilly terrain areas.

- Observed a 12% increase in performance after performing transfer learning, fine-tuning, and hyperparameter tuning.

- Achieved 96% accuracy using an EfficientNet model and published the work in the journal Applied Sciences.

**Tech Stack Used:** Deep learning, Image augmentation, Computer Vision, PyTorch, TensorFlow, Team Leader, Neural Networks, hyperparameter tuning, Google Colab, TPU Training, K-fold Cross-Validation

#### Nanyang Technological University (NTU) (Timeline: AUG 2022 - AUG 2023)

- Conducted a multimodal analysis of memes, achieving 89% accuracy in emotion recognition by integrating OCR for text extraction and deep learning for image analysis within a TensorFlow environment.

- Led research on facial emotion recognition for classroom applications, utilizing advanced computer vision techniques to enhance real-time student engagement monitoring.

- Developed and deployed a first-of-its-kind classroom emotion dataset, reducing the cost of data collection by 100% through efficient web scraping, landmark detection, and facial action unit mapping.

- Achieved a classification accuracy of 82.73% for complex emotions like boredom and frustration in classroom settings by integrating the dataset with state-of-the-art models using TensorFlow/Keras.

- Published the research in The European Conference on Education 2023 Official Conference Proceedings.

**Tech Stack Used:** Deep learning, Image augmentation, Data Collection, Web Scraping, Model Building from Scratch, Computer Vision, NLP, Multimodal training, PyTorch, TensorFlow, Team Leader, Neural Networks, hyperparameter tuning, Google Colab, TPU Training, K-fold Cross-Validation, AWS SageMaker.
### Further clarifying information about myself that I would like to be considered in this application

Beyond what's listed on my resume, I bring a mindset that blends technical depth with practical execution. I approach problems by thinking like a systems builder, always asking how a model, pipeline, or orchestration setup will scale, integrate, and drive measurable impact.

I'm naturally curious and tend to reverse-engineer processes until I understand not just how something works, but why it behaves that way. This trait has helped me bridge gaps between data science experimentation and real-world production constraints, especially when dealing with cloud environments, large-scale data, and evolving business needs.

I also take ownership end-to-end: from designing data collection logic and validation checks to deploying APIs or RAG-based systems that are production-ready. I'm comfortable working through ambiguity, questioning assumptions, and simplifying complex systems into actionable solutions.

Lastly, I genuinely enjoy translating technical outputs into insights that make sense to stakeholders. That balance of analytical rigor and communication is something I practice deliberately, and it's become one of my defining professional habits.
DS_portfolio.md
ADDED
@@ -0,0 +1,128 @@
# RESUME

## Personal Info

**Full Legal Name:** Venkata Dhanvanthar Murthy Voona

**Current Street Address:** 115 Belmont Ave, 07304

**Current City:** Jersey City, NJ, United States

**Contact Number:** 224-619-6584

**Country Code:** +1

**Email-id:** [email protected]

**LinkedIn:** https://www.linkedin.com/in/dv-63192b18b/

**GitHub:** https://github.com/dhanvanth342

**Portfolio:** https://www.datascienceportfol.io/vvoona

**Expected Salary:** 75,000 to 85,000 USD {mention this salary range ONLY when no salary range is mentioned in the job description}

**Are you legally authorized to work in the United States? (Yes/No):** Yes

**Current Visa Status:** I am currently on my initial OPT (F-1 visa) and authorized to work until July 16, 2026. After that, I am eligible for a two-year STEM OPT extension starting on July 16, 2026. I will not require visa sponsorship while on OPT or STEM OPT.

**Citizenship:** Indian, not a U.S. citizen.

**Will you now or in the future require employer sponsorship (e.g., H-1B)? (Yes/No):** Yes

**Gender:** Male

**Race/Ethnicity:** South Asian / Asian

**Veteran status:** Not a veteran

**Voluntary self-identification of disability:** No disability

**Preferred Locations:** I am open to relocating; my preferred locations are New York, Jersey City, Chicago, Seattle, Boston, and Tampa. [If locations are given in the job description, mention them instead of my preferences.]

**Relatives Information:** No relative, family member, or friend of mine works at any company in the USA. {So answer that I do not have any relatives when asked about it}

**STEM Degree:** Completed both my Master's {Data Science} and Bachelor's {Electronics and Communication Engineering} as STEM degrees.
## SUMMARY

Detail-oriented and innovative Data Scientist with over 3 years of experience delivering end-to-end analytics solutions across marketing, customer behavior, and AI-driven product development. Experienced in generative AI frameworks and retrieval-augmented workflows that enhance content reliability and user engagement. Skilled at building scalable data pipelines using Spark and Hive and deploying machine-learning models such as XGBoost and neural networks in SageMaker.

## SKILLS

- **Programming & Cloud Services:** Python, R, MATLAB, SQL, GCP, Amazon S3, SageMaker, Athena, Glue, EC2, ECS, Bedrock.

- **Data Engineering & ETL:** Apache Hive, FastAPI, Excel, ETL pipeline development, data integration (batch & streaming), Amazon Redshift, EventBridge, Apache Spark, Apache Hadoop, Databricks, Azure Data Factory, GCP Dataflow, Git.

- **Modeling & Analytics:** Predictive modeling & statistical analysis (regression, A/B testing), machine learning (scikit-learn, XGBoost), deep learning (TensorFlow, PyTorch), NLP (NLTK), Generative AI (LangChain, LangGraph, CrewAI), Computer Vision.

- **Visualization & BI Platforms:** Power BI (DAX, Power Query), Tableau, Google Charts, dashboard design & storytelling, reporting automation (Excel macros, SQL Server Reporting Services).
## WORK EXPERIENCE

### AI/ML Data Scientist (MAY 2025 – Present)

**AIDO, Chicago, Illinois**

About AIDO: AIDO presents itself as an AI-driven platform for international enrollment in higher education. According to their website, they help universities with automation, data insights, and recruitment of international students.

My experience:

- Prototyped and productionized domain-specific AI agents for automated information gathering and insight generation using LLMs, FastAPI, and serverless workflows, enabling business teams to self-serve and reducing Data Engineering involvement by 60%.

- Engineered and optimized hybrid retrieval pipelines with adaptive chunking and search strategies, improving retrieval accuracy by 40% across 150k+ samples and supporting high-performance insight generation at scale.

- Led cross-functional collaboration with business and platform teams to design, iterate, and deploy AI solutions, driving end-to-end integration, observability, and continuous improvement on cloud infrastructure.
### Data Scientist (JAN – MAY 2025)

**LabelMaster, Chicago, Illinois**

About LabelMaster: Labelmaster manages a B2B customer base of shippers, manufacturers, and logistics partners who rely on them for dangerous goods compliance tools, packaging materials, and software services. These customers often interact across multiple channels (website, CRM, email campaigns, trade shows, and training).

My experience:

- Deployed Apache Spark to merge 2 million rows of web analytics with CRM datasets, accelerating data pipeline performance by 40% and enriching customer behavior insights.

- Conducted SEO analytics with Pandas, NumPy, and SQL to evaluate ad campaign performance, reallocating budgets to top-performing campaigns, leading to a 15% boost in clicks and impressions and enhancing online visibility and engagement.

- Executed K-means customer segmentation on 250K accounts using sales, email, and web interaction data, developing tailored marketing strategies that reduced costs by 28%, increased sales by 12%, and improved email performance by 23%.
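Segmentation of this kind reduces to Lloyd's algorithm: assign each point to its nearest center, then move each center to the mean of its points. A minimal one-dimensional sketch (toy "spend" values, not LabelMaster data):

```python
def kmeans_1d(values: list[float], centers: list[float], iters: int = 10) -> list[float]:
    """Lloyd's algorithm on 1-D data for a fixed number of iterations."""
    for _ in range(iters):
        clusters: list[list[float]] = [[] for _ in centers]
        for v in values:
            # Assignment step: nearest center by absolute distance.
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Update step: each center moves to the mean of its assigned points;
        # an empty cluster keeps its old center.
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers

# Two obvious spend groups: low spenders near 10, high spenders near 100.
print(kmeans_1d([9.0, 10.0, 11.0, 98.0, 100.0, 102.0], centers=[0.0, 50.0]))
```

In practice this runs on multi-dimensional feature vectors (sales, email, web activity) via a library such as scikit-learn's `KMeans`, with k chosen by elbow or silhouette analysis.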
### Data Scientist (JAN 2022 – FEB 2024)

**Liminal XR Solutions, Mumbai, India**

About Liminal: Liminal XR Solutions is an agency based in Mumbai, India, specializing in extended reality (XR) services: augmented reality (AR), virtual reality (VR), mixed reality (MR), and web-XR. Their clients have included HP, Capgemini, and Hero, for whom they worked on the customer journey in the product pages of their websites.

My experience:

- Analyzed customer data with Python and Power BI, identifying a new market opportunity that expanded the customer base by 10% through development of a new product line.

- Utilized Apache Hive to execute SQL-like queries on HDFS-stored data, optimizing partitioning strategies to cut query execution time by 50% when analyzing datasets with 600K+ records.

- Built an AI Agentic RAG framework with Qdrant Vector DB for a restaurant recommender system using client-sourced data.

- Developed a sentiment analysis pipeline in Python to examine user feedback from XR applications, pinpointing key areas for improvement and driving data-driven enhancements that boosted user engagement by 15% within 6 months.

- Built, trained, and deployed machine learning models (Random Forest, XGBoost) in AWS SageMaker for customer churn prediction, enabling proactive retention strategies that improved customer retention by 20%.
### Machine Learning Research Intern (JUN 2021 – FEB 2022)

**Samsung, PRISM, Chennai**

About the PRISM program at Samsung: Samsung PRISM (Preparing and Inspiring Student Minds) is an industry-academia initiative launched by Samsung's Bangalore R&D centre (SRI-B) to engage engineering college students and faculty in real R&D projects across topics like AI, machine learning (ML), IoT, and 5G. Out of 4,000 students from my university who competed, I was one of the 60 selected for this program.

My experience:

- Researched "AI-Based Reflection Scene Category Classification" in a PyTorch environment and achieved 93% accuracy in identifying different types of reflections in real time on a dataset of 100,000+ images.

- Built a hybrid neural network and applied pruning techniques to reduce inference time by 60%, achieving 93% classification accuracy, comparable to state-of-the-art (SOTA) models [EfficientNet, MobileNetV3], for real-time reflection scene classification.
## PROJECTS

### Enhancing RAG: Unstructured Data Extraction and Vector DB Evaluation (Timeline: MAY 2025 - Present)

- Developed an end-to-end framework using Docling to extract content from PDFs and images, including hierarchical metadata such as headings, titles, and table names; upserted the data into a self-hosted Qdrant collection, improving PDF querying accuracy by 25% and reducing retrieval and storage costs by 100%.

- Researched, designed, and developed an evaluation framework to assess the relevance of retrieved chunks across various vector databases and identify the optimal upsertion method; the framework leverages bagging of metrics including cosine similarity, BERTScore, context relevance, and faithfulness.

- Currently developing a novel chunk upsertion method focused on dense and partially sparse embeddings, projected to reduce overall sparse-embedding storage by 95%.
### Text2Block (Timeline: OCT 2024 - OCT 2025)

- Designed and launched Text2Block, a GenAI application that transforms plain text into AI-driven flowchart visualizations with noise-free embedded text for enhanced clarity, attracting 450 unique users and processing 3,000 requests within the first two weeks.

- Integrated GA4 with GTM tags for A/B testing to assess user preferences, increasing website interactions by 45%.

- Streamlined the RAG pipeline by orchestrating a chain of LLMs within the LangChain framework to generate Python programming course materials, reducing hallucination in generated content by 66%.
## EDUCATION

### Master of Data Science (Timeline: AUG 2023 - MAY 2025)

**Major of Study in Computer Science and Mathematics**

**Illinois Institute of Technology, Chicago, IL**

- Graduate Pathway Scholarship Awardee ($10,000), awarded for academic merit at Illinois Institute of Technology.

- Coursework: Machine Learning, Database Organization, Generative AI, Data Preparation & Analysis, Big Data Technologies.

### Bachelor of Technology in Electronics and Communication Engineering (Timeline: JUL 2019 - APR 2023)

**Vellore Institute of Technology, Chennai, TN**

- Published a research paper in the fields of Deep Learning, Computer Vision, and NLP in the journal Applied Sciences.
## Additional Info

### RESEARCH

#### Over-volume vehicle classification using Deep CNN models (Timeline: DEC 2021 - MAY 2022)

- Collected real-time image data and performed image classification to keep a check on over-volume vehicles in the absence of human surveillance, assisting commuters in rural and hilly terrain areas.

- Observed a 12% increase in performance after performing transfer learning, fine-tuning, and hyperparameter tuning.

- Achieved 96% accuracy using an EfficientNet model and published the work in the journal Applied Sciences.

**Tech Stack Used:** Deep learning, Image augmentation, Computer Vision, PyTorch, TensorFlow, Team Leader, Neural Networks, hyperparameter tuning, Google Colab, TPU Training, K-fold Cross-Validation

#### Nanyang Technological University (NTU) (Timeline: AUG 2022 - AUG 2023)

- Conducted a multimodal analysis of memes, achieving 89% accuracy in emotion recognition by integrating OCR for text extraction and deep learning for image analysis within a TensorFlow environment.

- Led research on facial emotion recognition for classroom applications, utilizing advanced computer vision techniques to enhance real-time student engagement monitoring.

- Developed and deployed a first-of-its-kind classroom emotion dataset, reducing the cost of data collection by 100% through efficient web scraping, landmark detection, and facial action unit mapping.

- Achieved a classification accuracy of 82.73% for complex emotions like boredom and frustration in classroom settings by integrating the dataset with state-of-the-art models using TensorFlow/Keras.

- Published the research in The European Conference on Education 2023 Official Conference Proceedings.

**Tech Stack Used:** Deep learning, Image augmentation, Data Collection, Web Scraping, Model Building from Scratch, Computer Vision, NLP, Multimodal training, PyTorch, TensorFlow, Team Leader, Neural Networks, hyperparameter tuning, Google Colab, TPU Training, K-fold Cross-Validation, AWS SageMaker.
### Further clarifying information about myself that I would like to be considered in this application

Beyond what's listed on my resume, I'd highlight my adaptability and drive to quickly learn and apply new technologies as one of my core strengths. In every project, I make it a point to go beyond my comfort zone, whether that means picking up a new cloud service, experimenting with orchestration frameworks, or rethinking how a pipeline should be structured for scale.

At Labelmaster, for example, I joined a project midstream where the data infrastructure and model deployment processes were still evolving. I quickly familiarized myself with their tech stack, optimized workflows for large datasets, and contributed to refining model performance and insight delivery timelines. That experience reinforced my ability to learn systems from the inside out, align them with business goals, and deliver measurable impact under tight deadlines.

As a Data Scientist, I take pride in bridging experimentation and production. I adapt to whatever tools best serve the problem, from designing model pipelines in AWS to integrating explainability and monitoring layers. This adaptability has consistently allowed me to convert complex data challenges into clear, actionable results that move projects forward.
app.py
ADDED
@@ -0,0 +1,587 @@
import streamlit as st
from openai import OpenAI
import os
from dotenv import load_dotenv
from datetime import datetime
import pytz
from reportlab.lib.pagesizes import letter
from reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle
from reportlab.lib.units import inch
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer, PageBreak
from reportlab.lib.enums import TA_LEFT, TA_JUSTIFY
import io

# Load environment variables
load_dotenv()

# Page configuration
st.set_page_config(page_title="AI Resume Assistant", layout="wide")
st.title("🤖 AI Resume Assistant")

# Load API keys from environment variables
openrouter_api_key = os.getenv("OPENROUTER_API_KEY")
openai_api_key = os.getenv("OPENAI_API_KEY")

# Check if API keys are available
if not openrouter_api_key or not openai_api_key:
    st.error("❌ API keys not found. Please set OPENROUTER_API_KEY and OPENAI_API_KEY in your environment variables (.env file).")
    st.stop()


def get_est_timestamp():
    """Get current timestamp in EST timezone with format dd-mm-yyyy-HH-MM"""
    est = pytz.timezone('US/Eastern')
    now = datetime.now(est)
    return now.strftime("%d-%m-%Y-%H-%M")
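The `dd-mm-yyyy-HH-MM` stamp produced above can be sketched without any third-party dependency; this illustrative snippet uses the standard-library `zoneinfo` module in place of `pytz` (that substitution is my assumption, not the app's code):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib (Python 3.9+) stand-in for pytz, used only in this sketch

# Same format string as get_est_timestamp above: day-month-year-hour-minute
stamp = datetime.now(ZoneInfo("America/New_York")).strftime("%d-%m-%Y-%H-%M")
print(stamp)
```

The stamp later becomes the download filename, e.g. `Dhanvanth_<stamp>.pdf`.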


def generate_pdf(content, filename):
    """Generate PDF from content and return as bytes"""
    try:
        pdf_buffer = io.BytesIO()
        doc = SimpleDocTemplate(
            pdf_buffer,
            pagesize=letter,
            rightMargin=0.75*inch,
            leftMargin=0.75*inch,
            topMargin=0.75*inch,
            bottomMargin=0.75*inch
        )

        story = []
        styles = getSampleStyleSheet()

        # Custom style for body text
        body_style = ParagraphStyle(
            'CustomBody',
            parent=styles['Normal'],
            fontSize=11,
            leading=14,
            alignment=TA_JUSTIFY,
            spaceAfter=12
        )

        # Add content only (no preamble)
        # Split content into paragraphs for better formatting
        paragraphs = content.split('\n\n')
        for para in paragraphs:
            if para.strip():
                # Replace line breaks with spaces within paragraphs
                clean_para = para.replace('\n', ' ').strip()
                story.append(Paragraph(clean_para, body_style))

        # Build PDF
        doc.build(story)
        pdf_buffer.seek(0)
        return pdf_buffer.getvalue()

    except Exception as e:
        st.error(f"Error generating PDF: {str(e)}")
        return None


def categorize_input(resume_finder, cover_letter, select_resume, entry_query):
    """
    Categorize input into one of 4 groups:
    - resume_finder: T, F, No Select
    - cover_letter: F, T, not No Select
    - general_query: F, F, not No Select
    - retry: any other combination
    """

    if resume_finder and not cover_letter and select_resume == "No Select":
        return "resume_finder", None

    elif not resume_finder and cover_letter and select_resume != "No Select":
        return "cover_letter", None

    elif not resume_finder and not cover_letter and select_resume != "No Select":
        if not entry_query.strip():
            return "retry", "Please enter a query for General Query mode."
        return "general_query", None

    else:
        return "retry", "Please check your entries and try again"
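The four-way routing is a small truth table over the two checkboxes and the resume dropdown. This standalone sketch re-declares the same logic (copied from the function body) so it can be exercised without Streamlit:

```python
def categorize_input(resume_finder, cover_letter, select_resume, entry_query):
    # Resume Finder: only the finder box checked, no resume pre-selected
    if resume_finder and not cover_letter and select_resume == "No Select":
        return "resume_finder", None
    # Cover Letter: only the cover-letter box checked, a resume selected
    if not resume_finder and cover_letter and select_resume != "No Select":
        return "cover_letter", None
    # General Query: neither box checked, a resume selected, and a query is required
    if not resume_finder and not cover_letter and select_resume != "No Select":
        if not entry_query.strip():
            return "retry", "Please enter a query for General Query mode."
        return "general_query", None
    return "retry", "Please check your entries and try again"

print(categorize_input(True, False, "No Select", ""))            # ('resume_finder', None)
print(categorize_input(False, True, "Resume_P", ""))             # ('cover_letter', None)
print(categorize_input(False, False, "Resume_Dss", "Why us?"))   # ('general_query', None)
print(categorize_input(True, True, "Resume_P", "")[0])           # retry
```

Any combination outside the three valid rows (e.g. both boxes checked) falls through to the retry branch.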


def load_portfolio(file_path):
    """Load portfolio markdown file"""
    try:
        full_path = os.path.join(os.path.dirname(__file__), file_path)
        with open(full_path, 'r', encoding='utf-8') as f:
            return f.read()
    except FileNotFoundError:
        st.error(f"Portfolio file {file_path} not found!")
        return None


def handle_resume_finder(job_description, ai_portfolio, ds_portfolio, api_key):
    """Handle Resume Finder category using OpenRouter"""

    prompt = f"""You are an expert resume matcher. Analyze the following job description and two portfolios to determine which is the best match.

IMPORTANT MAPPING:
- If AI_portfolio is most relevant → Resume = Resume_P
- If DS_portfolio is most relevant → Resume = Resume_Dss

Job Description:
{job_description}

AI_portfolio (Maps to: Resume_P):
{ai_portfolio}

DS_portfolio (Maps to: Resume_Dss):
{ds_portfolio}

Respond ONLY with:
Resume: [Resume_P or Resume_Dss]
Reasoning: [25-30 words explaining the match]

NO PREAMBLE."""

    try:
        client = OpenAI(
            base_url="https://openrouter.ai/api/v1",
            api_key=api_key,
        )

        completion = client.chat.completions.create(
            model="openai/gpt-oss-safeguard-20b",
            messages=[
                {
                    "role": "user",
                    "content": prompt
                }
            ]
        )

        response = completion.choices[0].message.content
        if response:
            return response
        else:
            st.error("❌ No response received from OpenRouter API")
            return None

    except Exception as e:
        st.error(f"❌ Error calling OpenRouter API: {str(e)}")
        return None


def generate_cover_letter_context(job_description, portfolio, api_key):
    """Generate company motivation and achievement section using web search via Perplexity Sonar

    Args:
        job_description: The job posting
        portfolio: Candidate's resume/portfolio
        api_key: OpenRouter API key (used for Perplexity Sonar with web search)

    Returns:
        dict: {"company_motivation": str, "achievement_section": str}
    """

    prompt = f"""You are an expert career strategist. Your task is to generate two specific, personalized inputs for a cover letter.

Use your web search capability to find relevant company information. Given the job description and candidate's portfolio, you will:
1. Search for relevant company information (mission, values, recent projects, culture)
2. Identify the best achievement from the portfolio that matches the role
3. Generate targeted content for cover letter generation

Job Description:
{job_description}

Candidate's Portfolio:
{portfolio}

Generate a JSON response with exactly this format (no additional text):
{{
"company_motivation": "1-2 sentences showing specific interest in THIS company based on their mission/values/recent work. Use your web search to find specific details about the company. Should feel genuine and specific, not generic.",
"achievement_section": "One specific, quantified achievement from the portfolio that directly supports the job requirements. Format: 'achievement that resulted in specific outcome'."
}}

Requirements for company_motivation:
- Must reference something specific from the job description OR from web search about the company (company needs, projects, values, recent news)
- Should show research and genuine interest using real company information
- 1-2 sentences maximum
- Sound natural and authentic

Requirements for achievement_section:
- Must be concrete and specific
- Should include numbers/metrics when possible
- Must be relevant to the job requirements
- Maximum 1 sentence

Return ONLY the JSON object, no other text."""

    # Use Perplexity Sonar via OpenRouter (has built-in web search)
    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=api_key,
    )

    completion = client.chat.completions.create(
        model="perplexity/sonar",
        messages=[
            {
                "role": "user",
                "content": prompt
            }
        ]
    )

    response_text = completion.choices[0].message.content

    # Parse JSON response
    import json
    try:
        result = json.loads(response_text)
        return {
            "company_motivation": result.get("company_motivation", ""),
            "achievement_section": result.get("achievement_section", "")
        }
    except json.JSONDecodeError:
        # Fallback if JSON parsing fails
        return {
            "company_motivation": "",
            "achievement_section": ""
        }


def handle_cover_letter(job_description, portfolio, api_key, company_motivation="", specific_achievement=""):
    """Handle Cover Letter category using OpenAI

    Args:
        job_description: The job posting
        portfolio: Candidate's resume/portfolio
        api_key: OpenAI API key
        company_motivation: Why candidate is interested in THIS company/role (auto-generated if empty)
        specific_achievement: One concrete achievement to leverage (auto-generated if empty)
    """

    # Build context about company motivation if provided
    motivation_section = ""
    if company_motivation.strip():
        motivation_section = f"\nCandidate's Interest in This Role:\n{company_motivation}"

    achievement_section = ""
    if specific_achievement.strip():
        achievement_section = f"\nKey Achievement to Reference:\n{specific_achievement}"

    prompt = f"""You are an expert career coach writing authentic, human cover letters that stand out, not generic templates.

Your goal: Write a cover letter that feels like it was written by the actual candidate, showing genuine interest and proof of capability.

Cover Letter Structure (follow this order):
1. Opening (2-3 sentences): Hook with specific reason for interest in THIS company, not generic
2. Middle (4-5 sentences):
   - Show you researched them (reference job description specifics)
   - Connect 1-2 resume achievements directly to their needs
   - Briefly mention the achievement below to prove capability
3. Closing (1-2 sentences): Express enthusiasm and leave door open

Critical Requirements for Authenticity:
- Write like a real person, NOT a template (varied sentence length, conversational where appropriate)
- Show personality through word choice: confident but humble, professional but warm
- Every claim must link to either the job description or the achievement below
- Use specific details from the resume and job posting (shows real attention)
- NO fluff, NO corporate jargon, NO redundancy
- If something doesn't connect, don't force it
- Sound like someone who actually wants this job, not just applying to any job
- Do NOT mention salary expectations or benefits negotiations

Formatting Requirements:
- Start with "Dear Hiring Manager,"
- End with: "Best,\nDhanvanth Voona" (Best on one line, name on next line)
- Maximum 250 words (tight constraint = only include essential points)
- NO PREAMBLE (begin directly with opening)
- STRICTLY NO em dashes (use commas or separate sentences instead)
- Single paragraphs are fine; multiple short paragraphs OK

Context for Authentic Writing:
Resume:
{portfolio}

Job Description:
{job_description}{motivation_section}{achievement_section}

Response (Max 250 words, genuine tone):"""

    client = OpenAI(api_key=api_key)

    completion = client.chat.completions.create(
        model="gpt-5-mini-2025-08-07",
        messages=[
            {
                "role": "user",
                "content": prompt
            }
        ]
    )

    response = completion.choices[0].message.content
    return response


def handle_general_query(job_description, portfolio, query, length, api_key):
    """Handle General Query category using OpenAI"""

    word_count_map = {
        "short": "40-60",
        "medium": "80-100",
        "long": "120-150"
    }

    word_count = word_count_map.get(length, "40-60")

    prompt = f"""You are an expert career consultant helping a candidate answer application questions with authentic, tailored responses.

Your task: Answer the query authentically using ONLY genuine connections between the candidate's experience and the job context.

Word Count Strategy (Important - Read Carefully):
- Target: {word_count} words MAXIMUM
- Adaptive: Use fewer words if the query can be answered completely and convincingly with fewer words
- Examples: "What is your greatest strength?" might need only 45 words. "Why our company?" needs 85-100 words to show genuine research
- NEVER force content to hit word count targets - prioritize authentic connection over word count

Connection Quality Guidelines:
- Extract key company values/needs from job description
- Find 1-2 direct experiences from resume that align with these
- Show cause-and-effect: "Because you need X, my experience with Y makes me valuable"
- If connection is weak or forced, acknowledge limitations honestly
- Avoid generic statements - every sentence should reference either the job, company, or specific experience

Requirements:
- Answer naturally as if written by the candidate
- Start directly with the answer (NO PREAMBLE or "Let me tell you...")
- Response must be directly usable in an application
- Make it engaging and personalized, not templated
- STRICTLY NO EM DASHES
- One authentic connection beats three forced ones

Resume:
{portfolio}

Job Description:
{job_description}

Query:
{query}

Response (Max {word_count} words, use fewer if appropriate):"""

    client = OpenAI(api_key=api_key)

    completion = client.chat.completions.create(
        model="gpt-5-mini-2025-08-07",
        messages=[
            {
                "role": "user",
                "content": prompt
            }
        ]
    )

    response = completion.choices[0].message.content
    return response


# Main input section
st.header("📋 Input Form")

# Create columns for better layout
col1, col2 = st.columns(2)

with col1:
    job_description = st.text_area(
        "Job Description (Required)*",
        placeholder="Paste the job description here...",
        height=150
    )

with col2:
    st.subheader("Options")
    resume_finder = st.checkbox("Resume Finder", value=False)
    cover_letter = st.checkbox("Cover Letter", value=False)

    # Response length (applies to General Query answers)
    length_options = {
        "Short (40-60 words)": "short",
        "Medium (80-100 words)": "medium",
        "Long (120-150 words)": "long"
    }
    length_of_resume = st.selectbox(
        "Response Length",
        options=list(length_options.keys()),
        index=0
    )
    length_value = length_options[length_of_resume]

    # Select Resume dropdown
    resume_options = ["No Select", "Resume_P", "Resume_Dss"]
    select_resume = st.selectbox(
        "Select Resume",
        options=resume_options,
        index=0
    )

# Entry Query
entry_query = st.text_area(
    "Entry Query (Optional)",
    placeholder="Ask any question related to your application...",
    max_chars=5000,
    height=100
)

# Submit button
if st.button("🚀 Generate", type="primary", use_container_width=True):
    # Validate job description
    if not job_description.strip():
        st.error("❌ Job Description is required!")
        st.stop()

    # Categorize input
    category, error_message = categorize_input(
        resume_finder, cover_letter, select_resume, entry_query
    )

    if category == "retry":
        st.warning(f"⚠️ {error_message}")
    else:
        st.header("🤖 Response")

        # Debug info (can be removed later)
        with st.expander("🔍 Debug Info"):
            st.write(f"**Category:** {category}")
            st.write(f"**Resume Finder:** {resume_finder}")
            st.write(f"**Cover Letter:** {cover_letter}")
            st.write(f"**Select Resume:** {select_resume}")
            st.write(f"**Has Query:** {bool(entry_query.strip())}")
            st.write(f"**OpenAI API Key Set:** {'✅ Yes' if openai_api_key else '❌ No'}")
            st.write(f"**OpenRouter API Key Set:** {'✅ Yes' if openrouter_api_key else '❌ No'}")
            st.write(f"**OpenAI Key First 10 chars:** {openai_api_key[:10] + '...' if openai_api_key else 'N/A'}")
            st.write(f"**OpenRouter Key First 10 chars:** {openrouter_api_key[:10] + '...' if openrouter_api_key else 'N/A'}")

        # Load portfolios
        ai_portfolio = load_portfolio("AI_portfolio.md")
        ds_portfolio = load_portfolio("DS_portfolio.md")

        if ai_portfolio is None or ds_portfolio is None:
            st.stop()

        response = None
        error_occurred = None

        if category == "resume_finder":
            with st.spinner("🔍 Finding the best resume for you..."):
                try:
                    response = handle_resume_finder(
                        job_description, ai_portfolio, ds_portfolio, openrouter_api_key
                    )
                except Exception as e:
                    error_occurred = f"Resume Finder Error: {str(e)}"

        elif category == "cover_letter":
            selected_portfolio = ai_portfolio if select_resume == "Resume_P" else ds_portfolio

            # Generate company motivation and achievement section
            st.info("🔍 Analyzing company and generating personalized context with web search...")
            context_placeholder = st.empty()

            try:
                context_placeholder.info("🔄 Generating company motivation and achievement section (with web search)...")
                context = generate_cover_letter_context(job_description, selected_portfolio, openrouter_api_key)
                company_motivation = context.get("company_motivation", "")
                specific_achievement = context.get("achievement_section", "")
                context_placeholder.success("✅ Context generated successfully with web search!")
            except Exception as e:
                error_occurred = f"Context Generation Error: {str(e)}"
                context_placeholder.error(f"❌ Failed to generate context: {str(e)}")
                st.info("💡 Proceeding with cover letter generation without auto-generated context...")
                company_motivation = ""
                specific_achievement = ""

            # Now generate the cover letter
            with st.spinner("✍️ Generating your cover letter..."):
                try:
                    response = handle_cover_letter(
                        job_description, selected_portfolio, openai_api_key,
                        company_motivation=company_motivation,
                        specific_achievement=specific_achievement
                    )
                except Exception as e:
                    error_occurred = f"Cover Letter Error: {str(e)}"

        elif category == "general_query":
            selected_portfolio = ai_portfolio if select_resume == "Resume_P" else ds_portfolio
            with st.spinner("📝 Crafting your response..."):
                try:
                    response = handle_general_query(
                        job_description, selected_portfolio, entry_query,
                        length_value, openai_api_key
                    )
                except Exception as e:
                    error_occurred = f"General Query Error: {str(e)}"

        # Display error if one occurred
        if error_occurred:
            st.error(f"❌ {error_occurred}")
            st.info("💡 **Troubleshooting Tips:**\n- Check your API keys in the .env file\n- Verify your API key has sufficient credits/permissions\n- Ensure the model name is correct for your API tier")

        # Store response in session state only if new response generated
        if response:
            st.session_state.edited_response = response
            st.session_state.editing = False
        elif not error_occurred:
            st.error("❌ Failed to generate response. Please check the error messages above and try again.")

# Display stored response if available (persists across button clicks)
if "edited_response" in st.session_state and st.session_state.edited_response:
    st.header("🤖 Response")

    # Toggle edit mode
    col_response, col_buttons = st.columns([3, 1])

    with col_buttons:
        if st.button("✏️ Edit", key="edit_btn", use_container_width=True):
            st.session_state.editing = not st.session_state.editing

    # Display response or edit area
    if st.session_state.editing:
        st.session_state.edited_response = st.text_area(
            "Edit your response:",
            value=st.session_state.edited_response,
            height=250,
            key="response_editor"
        )

        col_save, col_cancel = st.columns(2)
        with col_save:
            if st.button("💾 Save Changes", use_container_width=True):
                st.session_state.editing = False
                st.success("✅ Response updated!")
                st.rerun()

        with col_cancel:
            if st.button("❌ Cancel", use_container_width=True):
                st.session_state.editing = False
                st.rerun()
    else:
        # Display the response
        st.success(st.session_state.edited_response)

        # Download PDF button
        timestamp = get_est_timestamp()
        pdf_filename = f"Dhanvanth_{timestamp}.pdf"

        pdf_content = generate_pdf(st.session_state.edited_response, pdf_filename)
        if pdf_content:
            st.download_button(
                label="📥 Download as PDF",
                data=pdf_content,
                file_name=pdf_filename,
                mime="application/pdf",
                use_container_width=True
            )

st.markdown("---")
st.markdown(
    "Say Hi to Griva thalli from her mama ❤️"
)