After learning the core data science libraries (numpy and pandas), data visualization with matplotlib and seaborn, the basic Python data structures (list, tuple, dictionary, set), and a lot from Andrew Ng's Machine Learning Specialization on DeepLearning.AI, I was searching for an internship where I could apply my skills. Yesterday, on the 31st of January, I took an impressive hackathon by Innomatics Research Labs and realized that all this knowledge was still not up to the mark for the questions they asked. They gave me three datasets in different formats: JSON, CSV, and SQL. The JSON and CSV files were easy to import and handle with pandas, but I struggled with the SQL file, until I found a neat approach to build a DataFrame out of the SQL file as well. Check out my GitHub repo to learn how I imported the SQL file in a different way; I bet you will be surprised. https://lnkd.in/dDgywy2Q And since you have read this far, I have a gift for you: I learnt all the data structures and the pandas library while making notes, and I am sharing my Colab files so you can learn from them too. For pandas: https://lnkd.in/dXT_N3JW For list, tuple, dict, set: https://lnkd.in/dsdyfSeE I have also added some practice questions on data structures. Let me know how you like them; comment if you find the technique useful or unique. #InnomaticsResearchLabs #pandas #DeepLearningAI #python #datastructures #hackathon
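The SQL-dump-to-DataFrame trick mentioned above can be sketched with only the standard library; this is one common approach (not necessarily the one in the linked repo), and the `orders` table here is invented for illustration:

```python
import sqlite3

# Execute the .sql dump in an in-memory SQLite database, then query it.
# With pandas installed, pd.read_sql("SELECT * FROM orders", conn) would
# return a DataFrame directly from the same connection.
sql_dump = """
CREATE TABLE orders (id INTEGER, item TEXT, amount REAL);
INSERT INTO orders VALUES (1, 'laptop', 55000.0);
INSERT INTO orders VALUES (2, 'mouse', 499.0);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(sql_dump)          # run the CREATE/INSERT statements from the dump

cursor = conn.execute("SELECT * FROM orders")
columns = [desc[0] for desc in cursor.description]  # column names from the cursor
rows = cursor.fetchall()
conn.close()

print(columns)   # ['id', 'item', 'amount']
print(rows)
```

The same idea scales to a real `.sql` file: read it with `open(path).read()` and pass the text to `executescript`, as long as the dump uses SQLite-compatible SQL.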
Implementing Data Science Skills in Python Internship
-
🚀 Machine Learning Project | California Housing Price Prediction 🏠📈 As part of my Machine Learning journey, I built a California Housing Price Prediction model while learning through courses and hands-on practice with GeeksforGeeks. This project gave me strong exposure to a complete end-to-end machine learning workflow, from raw data to deployment-ready models. 🔍 Project Highlights: ✔ Data acquisition & exploratory data analysis (EDA) ✔ Feature scaling using StandardScaler ✔ Regression modeling using Linear Regression ✔ Model evaluation using MSE, MAE, R² & Adjusted R² ✔ Regularization techniques with Ridge & Lasso ✔ Model persistence using Pickle for future use 🧠 What I learned: • Importance of data preprocessing & scaling • How regularization helps prevent overfitting • Evaluating regression models beyond just accuracy • Building ML models ready for real-world deployment 🛠 Tech Stack: Python | Pandas | NumPy | Matplotlib | Seaborn | scikit-learn | Google Colab 📌 This project strengthened my fundamentals in regression, model optimization, and ML best practices, and marks another step forward in my Machine Learning journey. 🔗 Project Notebook (Google Colab): https://lnkd.in/gReBqDt3 Open to feedback, suggestions, and learning discussions! #MachineLearning #DataScience #Python #MLProjects #Regression #LearningByDoing #GeeksforGeeks #StudentDeveloper
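The evaluation metrics listed above (MSE, R²) can be illustrated on a toy one-feature regression without any libraries; this is a hand-rolled sketch with invented numbers, whereas the actual project uses scikit-learn's `LinearRegression`, `Ridge`, and `Lasso`:

```python
# Fit y ≈ slope*x + intercept by ordinary least squares, then compute
# MSE and R^2 by their definitions.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]   # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form OLS for one feature
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

preds = [slope * x + intercept for x in xs]
mse = sum((p - y) ** 2 for p, y in zip(preds, ys)) / n          # mean squared error
ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))           # residual sum of squares
ss_tot = sum((y - mean_y) ** 2 for y in ys)                     # total sum of squares
r2 = 1 - ss_res / ss_tot                                        # coefficient of determination

print(f"slope={slope:.3f}, MSE={mse:.4f}, R^2={r2:.4f}")
```

Adjusted R² then only adds a penalty for the number of features: `1 - (1 - r2) * (n - 1) / (n - k - 1)` with `k` predictors.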
-
🚀 DeepSeek just raised the bar in OCR, and I explored it hands-on with Python. DeepSeek has released DeepSeek-OCR 2 (3B) 🐋, a new state-of-the-art model for visual, document, and OCR understanding. I built a full fine-tuning & inference notebook to test it in practice, and the results are 🔥 🔍 What’s new in DeepSeek-OCR 2? Unlike traditional vision LLMs that read images in a rigid grid (top-left → bottom-right), DeepSeek introduces DeepEncoder V2, a human-like visual scanning mechanism that first builds a global understanding of the page, then learns what to read first, what to read next, and why. 💡 Why this matters This new reading strategy dramatically improves performance on: 📄 Complex documents 📊 Tables & forms 🧾 Multi-column layouts 🔗 Label–value pairs 🧠 Mixed text + structure 📈 Performance highlights: outperforms Gemini 3 Pro on OCR benchmarks, a +4% improvement over the previous DeepSeek-OCR, and strong gains on real-world scanned documents. 🔥 #DeepSeek #OCR #ComputerVision #DocumentAI #LLM #VisionAI #Python #AIEngineering #DataScience #RAG #AI https://lnkd.in/dtDjMYWQ
-
Learn programming, not prompting, to be skilled for whatever the future is. AI as a product attribute isn't exceptional anymore; it's expected. A tool that's "AI-powered" isn't any more compelling to consumers than one that's "cloud-based" or "mobile-friendly". The difference maker today is the programming: the code around the function that calls the language model. Real AI literacy comes when you're closer to code. Learn scripting with Bash or Python with Google Colab. https://lnkd.in/gG-H47nP
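What "the code around the model call" means can be sketched in a few lines; the model here is a fake stand-in, and `call_model`/`ask_for_number` are hypothetical names, but the validation-and-retry logic is ordinary programming whatever LLM sits behind it:

```python
def call_model(prompt):
    # Stand-in for an API call to any language model.
    return "42" if "number" in prompt else "unsure"

def ask_for_number(prompt, retries=2):
    """Wrap the model call with output validation and a bounded retry loop."""
    for _ in range(retries + 1):
        reply = call_model(prompt)
        if reply.isdigit():            # validate before trusting the output
            return int(reply)
        prompt += " Reply with a number only."  # tighten the prompt and retry
    return None                        # explicit failure instead of a crash

print(ask_for_number("Pick a number."))  # 42
```

Nothing here is prompt engineering; it is control flow, validation, and error handling, which is exactly the skill the post argues for.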
-
Recently, I’ve been deepening my understanding of Python fundamentals and Object-Oriented Programming concepts through hands-on practice and real problem solving. Some of the key topics I’ve worked on include: Understanding the difference between shallow copy and deep copy when dealing with objects inside lists Using lambda functions for custom sorting based on object attributes Implementing flexible class designs using **kwargs to handle dynamic data Dynamically adding attributes to objects using setattr() Accessing and displaying object attributes using self.__dict__.items() Applying Encapsulation by creating private attributes (e.g., __balance) inside classes Building practical class-based systems like a BankAccount with deposit, withdraw, and get_balance methods while handling edge cases such as negative withdrawals Writing controlled loops using continue and break for condition-based execution These concepts helped me better understand how to design scalable and flexible class structures that can work with dynamic inputs such as JSON data or API responses — which is essential for real-world applications in automation, data processing, and backend systems. Project Link: https://lnkd.in/dkFY4uuU Special thanks to my Instructor Waled Saied, and my Mentor Iyad Mahdy , for their continuous support and guidance throughout this learning journey. Looking forward to applying these concepts in more advanced projects. #Python #OOP #SoftwareDevelopment #ProblemSolving #Programming #ComputerScience
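Several of the ideas listed above fit in one small sketch; this is an illustrative `BankAccount` written for this post, not the linked project's exact code:

```python
import copy

class BankAccount:
    def __init__(self, owner, balance=0, **extra):
        self.owner = owner
        self.__balance = balance          # encapsulation: name-mangled "private" attribute
        for key, value in extra.items():  # **kwargs for dynamic data (e.g. from JSON)
            setattr(self, key, value)     # dynamically add attributes

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__balance += amount

    def withdraw(self, amount):
        if amount <= 0 or amount > self.__balance:  # edge cases: negative / overdraft
            raise ValueError("invalid withdrawal")
        self.__balance -= amount

    def get_balance(self):
        return self.__balance

acc = BankAccount("Sara", 100, branch="Cairo")
acc.deposit(50)
acc.withdraw(30)

# Inspect attributes via self.__dict__.items(), as described above
attrs = dict(acc.__dict__.items())

# Shallow vs deep copy with objects inside lists
accounts = [acc]
shallow = copy.copy(accounts)             # new list, same BankAccount object
deep = copy.deepcopy(accounts)            # new list AND a cloned BankAccount
shallow[0].deposit(10)                    # mutates the original object too
print(acc.get_balance(), deep[0].get_balance())  # 130 120
```

The last two lines are the shallow-vs-deep distinction in action: the shallow copy shares the object, so depositing through it changes `acc`, while the deep copy keeps its own balance.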
-
🚀 Excited to share my latest Data Analysis & Machine Learning project using Python and Google Colab! In this project, I worked on analyzing food delivery order data to extract meaningful insights and build predictive models. ✅ Data Cleaning & Duplicate Handling Uploaded Excel dataset into Google Colab Detected and removed duplicate records Structured and prepared data for analysis 📊 Data Analysis & Visualization Restaurant-wise delivery success rate analysis Food item performance evaluation Monthly delivery success trend analysis Payment method vs delivery outcome insights Identified worst-performing food items and restaurants automatically 🤖 Machine Learning Implementation Converted delivery status into numerical labels Feature encoding using pandas Built a Random Forest classification model to predict delivery success/failure Evaluated model performance using train-test split 📈 Tools & Technologies Used: Python | Pandas | Matplotlib | Scikit-learn | Google Colab This project helped me strengthen my skills in data preprocessing, exploratory data analysis (EDA), visualization, and basic machine learning workflows. Project Notebook: https://lnkd.in/dUqeDVeg I’m continuously learning and exploring AI/ML and Data Analytics. Feedback and suggestions are always welcome! #DataAnalytics #Python #MachineLearning #DataScience #GoogleColab #LearningJourney #AI #BeginnerProject
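Two of the preprocessing steps above, duplicate removal and label encoding, can be sketched in plain Python; the real project does this with pandas before fitting scikit-learn's `RandomForestClassifier`, and the field names here are invented for illustration:

```python
orders = [
    {"restaurant": "A", "payment": "card", "status": "Delivered"},
    {"restaurant": "B", "payment": "cash", "status": "Failed"},
    {"restaurant": "A", "payment": "card", "status": "Delivered"},  # exact duplicate
    {"restaurant": "C", "payment": "upi",  "status": "Delivered"},
]

# Remove exact duplicate records while preserving order
seen, deduped = set(), []
for row in orders:
    key = tuple(sorted(row.items()))
    if key not in seen:
        seen.add(key)
        deduped.append(row)

# Convert delivery status into numerical labels (1 = success, 0 = failure)
status_map = {"Delivered": 1, "Failed": 0}
labels = [status_map[row["status"]] for row in deduped]

# Encode a categorical feature as integers
payments = sorted({row["payment"] for row in deduped})
payment_code = {p: i for i, p in enumerate(payments)}
features = [[payment_code[row["payment"]]] for row in deduped]

print(len(deduped), labels, features)
```

In pandas the same steps collapse to `df.drop_duplicates()` and `df["status"].map(status_map)`, but seeing them spelled out makes clear what the library is doing.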
-
Understanding Python data structures is not just about learning syntax — it’s about knowing when to use each one effectively. In my latest blog, I explore the practical differences between: Lists (flexible and ordered) Tuples (fixed and secure) Dictionaries (fast key-value access) Sets (unique and efficient) I also included: ✅ A clear comparison table ✅ Real-world examples (shopping cart, student system, email registration) ✅ A mini user management system combining multiple data structures ✅ A simple decision guide for beginners This topic helped me better understand how choosing the right data structure improves performance, readability, and overall code quality. Python Data Structures Guide Link: https://lnkd.in/gdA9Uuk9 Grateful to Innomatics Research Labs for encouraging practical and structured learning through hands-on assignments like this one. Programming is not just about writing code — it’s about choosing the right tool for the job. 💡 #Python #DataStructures #InnomaticsResearchLabs #LearnToCode #PythonProgramming #SoftwareDevelopment #CodingJourney #TechSkills #DeveloperGrowth #Programming
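The "right tool for the job" idea above fits in a few lines; these examples mirror the blog's scenarios (shopping cart, student lookup, email registration) with illustrative values:

```python
cart = ["bread", "milk", "bread"]          # list: ordered, duplicates allowed
origin = (30.0444, 31.2357)                # tuple: fixed coordinates, immutable
grades = {"ali": 87, "mona": 92}           # dict: fast key-value access
registered = {"a@x.com", "b@x.com"}        # set: uniqueness enforced

cart.append("eggs")                        # lists grow freely
assert "mona" in grades                    # O(1) average-case key lookup
registered.add("a@x.com")                  # duplicate is silently ignored
print(len(cart), grades["mona"], len(registered))  # 4 92 2
```

Picking the structure that matches the access pattern (membership test → set, lookup by name → dict) is what gives the performance and readability gains the blog describes.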
-
🚀 Built a YouTube Q&A system using RAG (LangChain + OpenAI) Created a small project where you can upload a YouTube video, ask any question related to its content, and get accurate answers or summaries using a complete RAG pipeline. 🔹 Extracted and processed transcripts 🔹 Generated embeddings and stored them in a vector database (FAISS) 🔹 Retrieved relevant context and generated responses using an LLM 🔹 Built the entire flow using LangChain Runnables 🛠 Tech Stack: Python, LangChain, OpenAI, FAISS, Vector Search 📎 Google Colab (code & implementation): https://lnkd.in/gAS7NNYi Actively learning and building in Generative AI & LLM-based systems 🚀 #RAG #LangChain #GenerativeAI #LLM #AIProjects
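The "retrieve relevant context" step of the pipeline above can be sketched conceptually without the real stack; here a bag-of-words vector and brute-force cosine similarity stand in for OpenAI embeddings and FAISS, and the transcript chunks are invented:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Transcript chunks, as a text splitter would produce them
chunks = [
    "the video explains vector databases and similarity search",
    "the speaker then cooks pasta as a break",
    "finally the video covers retrieval augmented generation with langchain",
]
index = [(chunk, embed(chunk)) for chunk in chunks]   # stand-in for the FAISS index

def retrieve(question, k=1):
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

context = retrieve("what does the video say about retrieval augmented generation")
print(context)  # the top-ranked chunk is then stuffed into the LLM prompt
```

The real pipeline replaces `embed` with a learned embedding model and the linear scan with FAISS's approximate nearest-neighbour search, but the retrieve-then-generate shape is the same.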
-
Day 5: From prompts to production-style AI workflow Today I built a book intelligence tool in Python that: - pulls live metadata from the Google Books API (title, authors, publishedDate, description) - uses a strict system prompt + output contract for consistent formatting - supports dual model backends: - gpt-4o-mini (via OpenRouter/OpenAI SDK pattern) - llama3.2 locally via Ollama - includes a reusable summarization pipeline (lookup -> prompt build -> model call -> structured response) - supports streaming responses in Jupyter with live token updates Technical focus areas I implemented: - prompt constraints to reduce hallucination risk (use only provided description) - deterministic generation (temperature=0) for stable output quality - backend abstraction so the same function can switch between cloud and local LLMs - graceful fallback when source metadata is incomplete This is the direction I’m doubling down on: LLM architecture, interoperability, reliability, and real user-facing utility. Explore the full Day 5 build here (code + outputs): https://lnkd.in/deueEekK Feedback and ideas for the next iteration are very welcome. Ed Donner #AIEngineering #LLM #GenerativeAI #OpenAI #Ollama #Python #MLOps #BlueOceanStrategy #BuildInPublic #Tech
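Two of the ideas above, backend abstraction and graceful fallback on incomplete metadata, can be sketched with fake backends (no network); `build_prompt` and `summarize` are hypothetical names, and the real build calls gpt-4o-mini or a local llama3.2 via Ollama where the lambdas sit here:

```python
def build_prompt(meta):
    desc = meta.get("description")
    if not desc:  # graceful fallback when source metadata is incomplete
        desc = "(no description available; answer from title/authors only)"
    return (
        "Summarize this book using ONLY the details below.\n"   # constraint against hallucination
        f"Title: {meta.get('title', 'unknown')}\n"
        f"Authors: {', '.join(meta.get('authors', []))}\n"
        f"Description: {desc}"
    )

def summarize(meta, backend):
    """Same pipeline, pluggable model: lookup -> prompt build -> model call."""
    return backend(build_prompt(meta))

# Fake backends standing in for the cloud and local models
cloud_llm = lambda prompt: f"[cloud] {prompt.splitlines()[1]}"
local_llm = lambda prompt: f"[local] {prompt.splitlines()[1]}"

book = {"title": "Deep Work", "authors": ["Cal Newport"]}  # note: no description
print(summarize(book, cloud_llm))
print(summarize(book, local_llm))
```

Because `summarize` only depends on a callable, swapping cloud for local is a one-argument change, which is the interoperability property the post is after.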
-
I’m excited to share a recent Object-Oriented Programming (OOP) project I built using Python — a simple School Management System developed to apply core OOP concepts in a practical, real-world scenario. In this project, I applied: • **Abstraction** by creating an abstract base class `Person` with a defined method `get_role()` for all subclasses. • **Inheritance** by extending `Person` into `Student` and `Teacher` classes to reuse shared attributes. • **Encapsulation** by making the student’s grade a private attribute and accessing it through getter methods. • **Polymorphism** by implementing a unified function to interact with different object types based on their role. • **Copy Behavior** by demonstrating the difference between shallow copy and deep copy when working with object lists. I also built a simple CLI-based interface to dynamically add and manage school members. This project strengthened my understanding of how OOP principles enhance code structure, reusability, and data protection when modeling real-life systems. Project Repository: https://lnkd.in/dXiRpcs3 Special thanks to my Instructor Waled Saied and my Mentor Iyad Mahdy at Instant Software Solutions for their guidance and continuous support. #Python #OOP #SoftwareDevelopment #ComputerScience #LearningJourney
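The class hierarchy described above can be sketched minimally; the class names follow the post, but the bodies are illustrative rather than the repository's exact code:

```python
from abc import ABC, abstractmethod

class Person(ABC):
    """Abstraction: cannot be instantiated; subclasses must define get_role()."""
    def __init__(self, name):
        self.name = name

    @abstractmethod
    def get_role(self):
        ...

class Student(Person):                      # inheritance: reuses Person's attributes
    def __init__(self, name, grade):
        super().__init__(name)
        self.__grade = grade                # encapsulation: private attribute

    def get_role(self):
        return "student"

    def get_grade(self):                    # getter for the private grade
        return self.__grade

class Teacher(Person):
    def __init__(self, name, subject):
        super().__init__(name)
        self.subject = subject

    def get_role(self):
        return "teacher"

def describe(person):
    """Polymorphism: one function works for any Person subclass."""
    return f"{person.name} is a {person.get_role()}"

members = [Student("Omar", 95), Teacher("Laila", "Math")]
print([describe(m) for m in members])
```

Trying `Person("x")` raises a `TypeError`, which is the abstract-base-class guarantee doing its job.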
-
Just wrapped up an exciting data science project focused on predicting Premier League football match outcomes! ⚽️ This journey took me from in-depth Exploratory Data Analysis (EDA) to building and optimizing Machine Learning models like Logistic Regression, Decision Trees, and SVMs. Key takeaways: EDA revealed fascinating insights: From home bias trends to the distribution of goals, understanding the data was crucial. Model Performance: Optimized Logistic Regression and SVM showed promising accuracy in predicting results, even with the inherent unpredictability of football. Feature Importance: Uncovered which half-time stats truly drive full-time results. The Power of Domain Expertise: Realized how much more potential lies in incorporating deep football knowledge (e.g., team form, injuries, tactics) into feature engineering. This project was a fantastic dive into applying ML to sports data, highlighting both the power and limitations of current models. Always learning! Project: https://lnkd.in/gGwbFeDY #DataScience #MachineLearning #FootballAnalytics #PremierLeague #Python #PredictiveModeling #SportsTech #EDA #LemonK #minkhant
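A toy version of the "half-time stats drive full-time results" check is the Pearson correlation between half-time goal difference and the final outcome; the numbers below are invented, not Premier League data:

```python
import math

# Half-time goal difference vs final result (1 = home win, 0 = draw, -1 = away win)
ht_goal_diff = [1, 0, 2, -1, 0, 1, -2, 1]
ft_result =    [1, 0, 1, -1, 1, 1, -1, 0]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from its definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(ht_goal_diff, ft_result)
print(f"correlation = {r:.2f}")   # strongly positive on this toy data
```

A strong positive `r` is what model feature-importance scores formalize: features correlated with the label tend to carry predictive weight, though correlation alone ignores the interactions a tree model or SVM can capture.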