Fremont, California, United States
8K followers
500+ connections
About
Articles by Jules
- SQL Scripting in Upcoming Apache Spark 4.0
The upcoming Apache Spark™ 4.0 introduces a new feature for SQL developers and data engineers: SQL Scripting.
88 · 3 Comments
- Does the SQL Language Need a Pipe (|) Operator? (May 11, 2025)
SQL remains a steadfastly declarative language. Apache Spark™ DataFrame API transformations draw its declarative…
15
- How to Process IoT Device JSON Data Using Spark Datasets and DataFrames - Part 2 (Mar 21, 2016)
Pursuing simplicity and ubiquity: "Spark is a developer's delight" is a common refrain heard among Spark's developer…
35 · 2 Comments
- Publish & Subscribe Data Pipeline with Confluent 2.0/Kafka 0.9 & InfluxDB (Dec 21, 2015)
The second part of this blog has been published. The idea is simple: Follow the Rule of Three to simulate IoT device…
8
- PubNub Integration with Apache Spark and InfluxDB: A Simulation of IoT Device Connectivity (Dec 8, 2015)
Thou Shall Publish…
11
- A Gopher Joins the Kingdom of Languages, with Ums, Ahs, Uhh... (Oct 23, 2015)
A set of learning exercises for Go (or any language, for that matter) is an excellent way to interactively and quickly…
3
- Apache Spark on Hadoop: Learn, Try and Do (Jun 15, 2015)
Not a day passes when someone tweets or re-tweets a blog on the virtues of Apache Spark. Not a week passes when an…
20
Activity
-
Jules Damji shared this:
Building agents is easy. Keeping them reliable in production? That's the real challenge. MLflow ships with built-in tracing and observability for AI agents. We're putting MLflow observability into practice at these Agentic + AI Observability Meetups. Free meetup, limited spots 👇 https://lnkd.in/g4fKKdrz
-
Jules Damji shared this:
Great to see a successful and scalable use case using Kubernetes & MLflow as part of LY Corporation's AI platform.
LY Corporation (one of Japan's leading tech giants) successfully integrated MLflow as a core pillar of their internal AI platform using Kubernetes. 🙌 Their approach provides a blueprint for scaling MLOps infrastructure in high-traffic, high-security environments:
🔐 Non-invasive auth: To keep the MLflow OSS core clean, they implemented security via an Authorization Proxy sidecar in Kubernetes. This allows for seamless version upgrades without re-coding custom features.
📡 Machine-to-machine authorization & authentication: Training pods use mTLS-based authentication (SPIFFE) to automatically obtain access tokens. Data scientists can focus on models while the platform handles identity transparently.
⚖️ Scalable isolation: By providing dedicated MLflow instances for each service on Kubernetes, they ensure stability and strict RBAC without the inefficiency of "shadow IT."
As ML usage moves from experimental "toy projects" to business-critical infrastructure, the "integration tax" of security can't be an afterthought. LY Corp proves that a "Golden Path" approach makes high-security MLOps sustainable with MLflow. 🚀
🔗 Read the full technical deep dive here: https://lnkd.in/eQfGDaRN
#MLflow #GenAI #LLMOps #AIAgents #Kubernetes
-
Jules Damji reposted this:
🤖 Your chatbot answers are perfect, so why are users still leaving? Accuracy and grounding are great, but single-turn metrics won't catch when an agent loses context or fails to resolve a goal over a long conversation. MLflow 3.10 bridges this gap with new research-backed tools for multi-turn evaluation & simulation:
🔹 Conversation Simulation: Stress-test agents with LLM "users" following specific personas
🔹 Session-Level Scorers: New built-in metrics like ConversationCompleteness and UserFrustration
🔹 Trace-Based Test Cases: Automatically turn real traced conversations into synthetic test scenarios
🔹 Side-by-Side Comparison: See exactly how prompt changes impact long-term user satisfaction in the UI
Want to see the code and research behind these new tools? 👇
🔗 Check out the full blog post: https://lnkd.in/esQVjHJk
#MLflow #LLM #Chatbots #GenerativeAI #Evaluation #AIObservability
-
Jules Damji shared this:
🔥 Most teams building agents today have no idea what's happening inside them once they're deployed. That's a problem. As agents get more autonomous, observability isn't optional. It's the difference between a demo and a production system. We are co-organizing a meetup to tackle this head-on. Join us at the Agentic + AI Observability Meetup SF on April 9th at Databricks HQ. Two stellar talks, zero fluff. Here's the lineup:
🏗️ "Building Governed Agents with Databricks" by Sunish Sheth (Senior SWE, Databricks). What governance actually looks like when agents operate autonomously.
🤖 "From Primitives to Production: How Anthropic Builds Agents" by Isabella He (MTS, Anthropic). The real story behind building Claude Code agents, from code execution to MCP servers to production iteration.
🍕 Networking with refreshments from 7:15 PM onward. Doors open at 5:00 PM.
📍 Databricks Inc., San Francisco
📅 Thursday, April 9, 2026 | 5:00 PM - 8:00 PM PST
🎟️ Free admission
If you care about structured traces, AI evals, or shipping agents that don't quietly break in production, this one is for you. Spots will fill up. Grab yours here 👇 https://lnkd.in/g4fKKdrz
#AIObservability #LLMOps #MLflow #AIOps #AIEvals
-
Jules Damji shared this:
In our latest blog on the lakebase category, we explore the ways that agents are changing software development, and by extension, databases. Agents rapidly spin up, use, and discard databases, with the average life of agent-created databases at less than 10 seconds. They also prefer open source tools like Postgres, because that's what they're trained on. This is why the lakebase architecture is so powerful for agents: instant branching at near-zero cost, true scale-to-zero elasticity, and open Postgres storage. Read the full blog for more! #AI #AgenticAI #Databricks #Lakebase #Postgres #CloudDatabase
How agentic software development will change databases
-
Jules Damji shared this:
And the series continues, with its fourth episode just released. Take a listen 👂 This is part 2 of tracing; part 1 was episode 3. Tracing is a foundational cornerstone of any GenAI application for AI observability. If you can't see what your agent is doing at each step, spanning LLM calls, tool invocations, retrieval steps, and latency at every node, then evaluation has nothing to work with. Tracing gives you that visibility, and it changes the conversation from "I think the agent is working" (guessing) to "here's exactly what happened on this request" (measuring).
Auto-logging is great for standard LLM calls, but production-grade GenAI usually involves custom logic, internal APIs, tool usage, and complex agentic loops that standard trackers can't see. In our latest tutorial (Notebook 1.4), we move beyond the defaults to show you how to instrument your code manually using the @mlflow.trace decorator. The technical breakdown:
🔹 Custom Span Types: Categorize operations (Agent, Retriever, Tool, Parser, Chat, etc.) for granular metrics in the UI.
🔹 Hierarchical RAG Tracing: Track the full lifecycle from query embedding to document retrieval.
🔹 Agentic Observability: Capture tool selection and parameter usage in autonomous workflows.
🔹 Root-Cause Analysis: Use the MLflow Assistant to identify and fix schema errors in custom traces.
🎥 Watch the tutorial: https://lnkd.in/eDFrnB99
🔗 Get the code: https://lnkd.in/gyiKNT2S
#MLflow #GenAI #LLMOps #RAG #AIAgents
-
Jules Damji shared this:
Secure your spots for the upcoming ones, as we have a great lineup of speakers from companies at the forefront of agentic systems. Seize the moment!
We had a great Agentic + AI Observability Night at Terra Gallery last week! Here are some highlight photos from the event! And yes, we will be posting the videos and slides from the event shortly. Meanwhile, I would like to thank Aparna Dhinakaran (Arize AI), David G. (Factory), Greg Pstrucha (Sentry), Lei Xu (LanceDB), Marius Buleandra (Anthropic) and Oleksandra Bovkun (Databricks) for their thoughtful technical presentations about the importance of #ai #agentic #evaluations. And thanks to Jules Damji and Maria Pere-Perez for MC'ing the event (with a special call-out to Maria for keeping things on pace and enjoyable)! And of course thanks to all the volunteers who helped run the event at the wonderful Terra Gallery, led by Yufa Li and the amazing Elizabeth Sapiro Santor, without whom this event could not have happened. Thanks again to Sarah Gonzalez (https://lnkd.in/gzamkc4h) for the amazing photos - more to come!
-
Jules Damji shared this:
A worthy use-case validation from Rahul Pandey on how MLflow helps track all the LoRA hyperparameters while fine-tuning your model. Take a read!
Excited to share something I've been building: auto-finetune 😇 Getting an LLM to truly speak your language requires two things: the right system prompt, and weights that are adapted to your task. auto-finetune does both — automatically. You describe your use case in plain English. Upload your input/output examples. The system synthesizes a custom system prompt via LLM, then autonomously fine-tunes a small model of your choice using LoRA — searching for the best hyperparameters by training, evaluating, and iterating. All without you touching a single config file. All tracked in MLflow. Best adapter saved when it's done. The result? A compact, task-adapted model that understands your domain, follows your format, and responds in your language. Key highlights:
▪️ Agent-driven search loop — Inspired by Karpathy's autoresearch. The agent reads a generated `program.md`, proposes a hypothesis, runs an experiment, reads results, and iterates. Up to N times. Fully autonomous.
▪️ Constrained by design — The agent edits exactly one file and exactly two dicts (`LORA_CONFIG` + `TRAINING_ARGS`). Nothing else. This makes every run safe, auditable, and reversible.
▪️ Full MLflow tracking — Every hypothesis, config change, and metric is logged.
▪️ Inference Lab — Compare up to 3 fine-tuned adapters side-by-side against the base model in a built-in UI.
▪️ Failure diagnosis — When a run scores poorly, the agent automatically diagnoses why (underfitting, wrong target modules, bad learning rate) and feeds that into the next iteration.
▪️ Runs 100% locally — Toggle between Anthropic (Claude models) and Ollama per pipeline stage. Run the entire thing for $0.
Supports: Classification · Extraction · Generation
Models for fine-tuning: Qwen 2.5 (0.5B–1.5B), Phi-3 Mini, Llama 3.2 (1B–3B)
It sits between AutoML and full autoresearch. This is the sweet spot for practitioners who just need a reliable small model for a narrow task.
🔗 GitHub → https://lnkd.in/d9S6Q3rw
Would love your feedback — drop a ⭐ if this is useful, and let me know what tasks you'd fine-tune for!
#MachineLearning #LLM #FineTuning #OpenSource #MLflow #LoRA #AI #NLP #GenAI
-
Jules Damji shared this:
The MLflow Workspaces contribution from Matthew Prahl is a vital feature enhancement for organization, isolation, and access control.
Prior to Workspaces, MLflow grouping stopped at the experiment level. Now, we've added a layer above it. 👆📦 MLflow Workspaces allow you to:
1️⃣ Isolate workstreams by team or project.
2️⃣ Assign resources (models, prompts, experiments) to specific workspaces.
3️⃣ Control access globally at the workspace level.
As Matthew Prahl says: "Gone are the days of that really long experiments list that you keep on scrolling down."
🎥 Catch the full walkthrough on YouTube: https://lnkd.in/e_xF5aiD
#LLM #GenAI #AIGateway #MLflow #PlatformEngineering
-
Jules Damji reacted to this:
In a previous post, I wrote about how to sandbox your coding agents in containers so they can move fast without breaking everything [1]. Once you've got a sandboxed agent that works, the next question is: how do I run five of them on the same repo at the same time? The answer is Git worktrees. One command gives each agent its own branch in its own directory, no cloning, no duplication. Every agent works in its own lane, and they never step on each other. When they're done, you review, rebase, and merge like any other PR. As you get comfortable running multiple agents, you also need visibility into what each one is doing. This is where Databricks AI Gateway comes in. Databricks AI Gateway gives you a single interface to manage all your coding agents (Claude, GPT-4, Gemini, and more), with guardrails to control what models can access, real-time monitoring of every request and response, and rate limits to keep costs in check, all without sacrificing model capabilities [2]. I break down the Git worktrees workflow and how AI Gateway governs each agent in a Medium article [3]. Links in the comments.
-
Jules Damji liked this:
Prompt engineering doesn't actually ban words. Neither does temperature. Or top-k. Or top-p. All of them reduce the *chance* a token appears. None of them set it to zero. In my new video, I build it from scratch in Python — showing you exactly where randomness enters an LLM, how softmax works at the math level, and the single line that makes a token genuinely impossible to generate. Covers allowlist and blocklist strategies, real gotchas, and why this matters for production guardrails. 35 minutes. No filler. 🔗 https://lnkd.in/dQF2zgAc #LLM #Python #NLP #GenerativeAI #MLOps #AIEngineering #ConstrainedGeneration
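The "single line" the post alludes to is, under the standard approach, masking a token's logit to negative infinity before softmax; since exp(-inf) is 0, that token's probability becomes exactly zero, which no temperature or top-k/top-p setting can achieve. A toy version with an invented three-token vocabulary:

```python
import math

def softmax(logits):
    # Shift by the max for numerical stability, then normalize.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy three-token vocabulary; names and logit values are invented.
vocab = ["hello", "world", "banned"]
logits = [2.0, 1.0, 3.0]

# Temperature / top-k / top-p only reshape these probabilities;
# every token keeps some nonzero chance under plain softmax.
probs = softmax(logits)
assert probs[vocab.index("banned")] > 0

# The one change that truly bans a token: set its logit to -inf.
# exp(-inf) == 0, so its probability is exactly zero after softmax.
logits[vocab.index("banned")] = float("-inf")
probs = softmax(logits)
assert probs[vocab.index("banned")] == 0.0
```

The same trick generalizes to an allowlist (mask every logit *not* in the allowed set to -inf) or a blocklist (mask only the banned ids), applied to the logit vector at each decoding step.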
-
Jules Damji reacted to this:
I get asked sometimes whether I had a career plan early on. Honestly? When I look back, I think I just kept saying yes to the things that scared me a little. I came to Silicon Valley in 1999 with limited finances, no network, no roadmap, no fallback, like many engineers who made that same leap in those days. No safety net of any kind. Just a belief that if you worked hard and stayed curious, this country would meet you halfway. The American dream, in the most literal sense. No playbook. No one to call. Just judgment, and the pressure to develop it fast. That's where I learned what building something means, what muscle memory really is, and what leadership actually is. It's not conferred by a title. It's earned through hard work and trust. And people decide whether they trust your judgment long before they ever care about your authority. Every unconventional move I made (and I made many!) looked risky from the outside. From the inside, I think it was the only way I knew how to grow. I say 'has been' because I'm not done. The building continues. The leaders I most admire didn't climb a ladder, they moved across. Product, engineering, operations, strategy, GTM. Early-stage startups to scaled multinationals, and reset. They collected skills, battle-tested ideas, and built real-world perspective through experience, the way others collect promotions. And that range became their greatest competitive advantage. The roles that made me weren't the comfortable ones. They were the ones where I was learning in real time, figuring it out as I went, and proving, mostly to myself, that I could. So if you're looking at an opportunity that doesn't fit neatly into your plan, feels a little too big, a little too unfamiliar… that's probably the one.
-
Jules Damji liked this:
🎤 DEOF 2026 Speaker Highlight: We're excited to have Boyang Jerry Peng, Staff Software Engineer at Databricks, speak at the Data Engineering Open Forum 2026 on April 16 in San Francisco! Session title: Apache Spark: Structured Streaming Real-Time Mode. Jerry Peng is a Staff Software Engineer at Databricks and a committer and PMC member of Apache Pulsar, Apache Storm, and Apache Heron. In this talk, he will dive into the evolution of Spark Structured Streaming, explain how Real-Time Mode works, share insights into how we extended the Structured Streaming architecture to enable low-latency processing, and highlight how users are using it. What else? Companies like Airbnb, Netflix, and OpenAI will be recruiting onsite, making this a great opportunity to explore new career opportunities. Join us on April 16 for a full day of insightful sessions, open discussions, and networking with the data engineering community!
📋 Agenda: https://lnkd.in/ga_6yA9w
🎟 RSVP: https://lnkd.in/ge_Bj48S
💡 Join the DET community to get 33% off DEOF tickets. Subscribe to our newsletter and find the code in the welcome email: https://lnkd.in/eG5kmetq
Thank you to our sponsors for making this community event possible: Databricks, Astronomer, Airbnb, VeloDB (Powered by Apache Doris), Altimate AI, CelerData, Dremio, Matia, MinIO, Netflix, OpenAI, PuppyGraph, and StreamNative.
#DEOF #dataengineering #data #softwareengineering #conference
-
Jules Damji reacted to this:
I had a wonderful time presenting at the Databricks Community Night hosted by RevoData! It was a pleasure to share my session on Building High-Quality AI Agents and discuss how we can move from experimentation to production-grade reliability using MLflow. A huge thank you to the organizers at RevoData for putting together such a seamless event and for being fantastic hosts (Leah Cullen and team, you're rockstars). It was also a privilege to share the stage with the team from Eneco, who provided fascinating insights into the energy consumption forecasting platform they built. To everyone who attended: thank you for the thoughtful questions and the great energy you brought to the room. You made it absolutely gezellig 🇳🇱 Stay tuned for upcoming events, meetups, and local chapters by keeping an eye on the official Databricks User Group page: https://lnkd.in/eWwFiFZw
-
Jules Damji liked this:
Lakebase Postgres is now GA on Microsoft Azure Databricks! This integrates with Entra, Agent 365, and the whole Azure stack! Very excited. If you don't know what Lakebase is, it's a new generation of databases for the agentic era, highly recommended reading: https://lnkd.in/geAae4qp
For years, developers have had to bridge the gap between operational databases and analytics with brittle ETL pipelines. That data tax — duplicated storage, persistent lag, fragmented governance — ends today. Azure Databricks Lakebase is now generally available. Lakebase is a managed, serverless Postgres built natively into the Databricks Platform on Microsoft Azure, sharing the same storage layer as your lakehouse. Instant branching and zero-copy clones accelerate development without compromising governance. No pipelines to maintain, no data out of sync, one governance model across your entire data estate. https://lnkd.in/eq8p2C6W
Experience & Education
-
Anyscale
Licenses & Certifications
Publications
-
Databricks Product and Engineering Publications
Databricks
90+ product, open-source, conference, webinar, and engineering blog posts.
-
Learning Spark 2nd Edition
O'Reilly
Book description
Data is bigger, arrives faster, and comes in a variety of formats—and it all needs to be processed at scale for analytics or machine learning. But how can you process such varied workloads efficiently? Enter Apache Spark.
Updated to include Spark 3.0, this second edition shows data engineers and data scientists why structure and unification in Spark matters. Specifically, this book explains how to perform simple and complex data analytics and employ machine learning algorithms. Through step-by-step walk-throughs, code snippets, and notebooks, you’ll be able to:
* Learn Python, SQL, Scala, or Java high-level Structured APIs
* Understand Spark operations and SQL Engine
* Inspect, tune, and debug Spark operations with Spark configurations and Spark UI
* Connect to data sources: JSON, Parquet, CSV, Avro, ORC, Hive, S3, or Kafka
* Perform analytics on batch and streaming data using Structured Streaming
* Build reliable data pipelines with open source Delta Lake and Spark
* Develop machine learning pipelines with MLlib and productionize models using MLflow
Patents
-
Web site monitoring system and method
US WO 2002013018 A3
Methods and systems for monitoring transactions and components on web sites are described. A universal monitoring module invokes a transaction agent associated with the web site by, for example, transmitting an HTTP GET command to a unique URL associated with that transaction agent. The transaction agent, which can be customized by the web site's owner to test the reachability and functionality of the web site components, performs at least one transaction on the web site and reports results using a standardized reporting format that can be readily parsed by the universal monitoring module. The universal monitoring module takes appropriate action, e.g., alerting web site responsible parties, based on the reported data. Using this type of monitoring methodology, a scalable yet customized monitoring solution is achieved.
Honors & Awards
-
The Netcenter Dedication and Contribution Award
Vice President Mike Homer
This award recognized my leadership and stewardship in deploying and supporting a key component of Netscape's Member Directory Infrastructure.
-
The Java Cup International Achievement Award, 1996
Java Cup International judges, including Dr. Eric Schmidt, Scott McNealy, Dr. James Gosling, Bill Joy, Marc Andreessen, and Carol Bartz.
This award honored our group's successful completion of the world's first Java Cup International competition, in which 2700 Java programmers from around the world submitted Java applets in numerous computing categories. The Java Cup winners were recognized at the JavaOne Conference.
-
The Team Award for Quality
Dr. Eric Schmidt
This team award recognized our collaboration software SparcWorks/TeamWare for its quality, robustness, and performance.
Recommendations received
5 people have recommended Jules
Other similar profiles
-
Jaime Rosales D.
Jaime Rosales D.
I'm a Developer Relations leader with over 14 years of experience in Developer Advocacy, API Evangelism, and Technical Community Building. My passion lies in helping developers succeed, whether through clear documentation, hands-on workshops, or direct collaboration with engineering teams to improve developer experiences.<br><br>As a global speaker, workshop lead, and hackathon organizer, I've led 50+ Cloud Accelerators, worked with AWS to develop Autodesk's first AWS QuickStarts—helping customers deploy applications in just 15 minutes—and explored AI-driven rendering techniques using APS Viewer, Stable Diffusion, and ComfyUI.<br><br>I love building communities, mentoring developers, and making complex technology more accessible. Whether leading a team, guiding a developer through API integrations, or speaking at industry events, my goal is always the same: empower developers and drive meaningful innovation.<br><br>Certified Scrum Master, graduate of the Autodesk Cloud Engineering Bootcamp, and a lifelong advocate for learning, collaboration, and developer enablement.
3K followersNew York, NY -
Nina Zakharenko
Nina Zakharenko
Experienced Software Engineer and Developer Advocate with a focus on Python and experience in creating and delivering worldwide keynote level technical talks.
2K followersSan Francisco Bay Area
Explore more posts
-
Redis
292K followers
LiteLLM is an open-source proxy that connects your app to LLMs through one unified interface. Paired with Redis, it gives AI and ML teams a simple yet powerful way to unify access to LLMs, accelerate response times, and make AI apps real-time. Redis handles performance, memory, and data coordination that modern AI apps demand, while LiteLLM handles abstraction and routing. Our AI Product Marketing Manager Rini Vasan dives into: ▶️ What LiteLLM is ▶️ Why it works well with Redis ▶️ How LiteLLM and Redis work together Learn how to scale your LLM gateway with LiteLLM & Redis: https://lnkd.in/gqsTebF7
32
-
KubeFM
7K followers
Julia Blase, Product Manager at Chronosphere, discusses how observability needs to evolve alongside Kubernetes. She explains that Kubernetes brought dynamic, scalable infrastructure, and monitoring systems must follow suit. The future requires on-demand instrumentation and dynamic data collection that can scale up and down based on needs, helping teams optimize costs while getting the right insights at the right time. Watch the full interview: https://ku.bz/X6tgWrG0P
2
-
Mihail Eric
Stanford University • 17K followers
Typically coding represents just 30% of engineering time. The remaining 70% is running software in production where complexity, tool silos, knowledge gaps, and interdependencies all collide. That’s why I’m so excited to have Mayank Agarwal, Founder and CTO of Resolve AI and co-creator of OpenTelemetry, and Milind Ganjoo, former Google Gemini technical lead, lecturing this week at our Stanford class 𝗧𝗵𝗲 𝗠𝗼𝗱𝗲𝗿𝗻 𝗦𝗼𝗳𝘁𝘄𝗮𝗿𝗲 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿. They will discuss lessons learned building Resolve AI, the most advanced AI devops agent in the world. They are leading the charge on using AI to tame production software systems. Their talk is this 𝗙𝗿𝗶𝗱𝗮𝘆 𝗮𝘁 𝗦𝘁𝗮𝗻𝗳𝗼𝗿𝗱 (𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝟰𝟮𝟬, 𝗿𝗼𝗼𝗺 𝟰𝟭) 𝗮𝘁 𝟴:𝟯𝟬𝗮𝗺. Join us!
43
7 Comments -
Imply
20K followers
Cribl Stream helps you control how data flows. Imply Lumi helps you control how data is stored and searched. Together, they deliver a modern observability architecture built for control, performance, and efficiency: - Route and retain what matters most - Search quickly and interactively across all retained data - Integrate seamlessly with your existing stack - Reduce costs without disrupting workflows Lumi isn’t just a faster backend — it’s part of a next-generation architecture that decouples storage, indexing, and query from the observability tools above. That means greater flexibility, less lock-in, and a solid foundation for what comes next. https://bit.ly/49iAXm1
18
2 Comments -
Sravan Sarraju
Oracle • 800 followers
In today’s world, being a strong generalist is a huge advantage. No matter where you work, adopt a builder mindset. Think like the founder of your space. Own the problem, wear multiple hats, and focus on delivering value to the end user. AI is accelerating everything. Those who can connect across domains and apply AI effectively will create the most impact. Being a generalist with a builder mindset is not just useful, it’s essential.
2
-
Titus Winters
Adobe • 3K followers
I've been citing and sharing this for a while already. I *strongly* recommend watching the full version of this talk. If you're already accustomed to platform engineering ideas I think it really takes off around the 17m mark, but the whole thing is a great summary. This captures *so* much important stuff, and I love the "shift-down" term. It's well understood that software architecture is the major influence of quality attributes (thanks George Fairbanks), and as we broaden from thinking of individual products to shared platforms, agentic workflows, cross-product functionality, the properties that have been shifted "down" into the platform are going to be the only ones we can really rely on at scale. (A bunch of whitepapers in security or compute efficiency have made the same point in different terms.)
66
2 Comments -
Causely
2K followers
If you are building or evaluating adding AI to reliability, this is a must watch. This episode of Slight Reliability breaks down where AI helps in SRE and operations, and where it falls short. Causely founder Shmuel Kliger, explains why modern observability turned operations into a big data problem, and why causal reasoning is required to infer the single upstream cause behind many degraded services. A practical conversation about reducing toil, avoiding AI hype, and focusing on what actually works. Full episode here: https://lnkd.in/epj_x7q6
20
-
Kapil Gupta
JioHotstar • 5K followers
As conversations at #AIImpactSummit2026 revolve around scaling AI responsibly, I’m especially proud of the rigorous work our Data Science team has delivered. Our latest write-up dives into how we built a scalable validation framework for video generation - measurable & production-grade evaluation across visual consistency, deformity detection and consistencies for script alignment. In the AI media space, generation is only half the problem. Validation is the real differentiator. Kudos to the Data Science team (Shreya Singh Preetham Gali Akshay Sharma Sanjai L Janani Ramaswamy), led by Sagar Tekwani for bringing scientific rigor to creative AI workflows. This is the kind of foundation that enables AI to move from experimentation to dependable production systems. Shoury Bharadwaj Vijay Seshadri #GenerativeAI #AIMedia #DataScience #ResponsibleAI
29
-
Reza Shafii
Kong Inc. • 5K followers
💡 Successful AI outcomes depend on solid, well-governed APIs — secure, reliable, and discoverable — and on modern platforms that make those APIs possible. That was the central theme of my keynote at the Kong API Summit in New York City 🇺🇸 two weeks ago, where I had the pleasure of sharing the stage with Demetry Zilberg (CTO, Alter Domus) and Rupesh Papneja (Principal Engineer, PEXA). While there’s plenty of hype around AI today 🤖, what ultimately determines success is the strength of the foundation beneath it — the APIs, data flows, and platform capabilities that make AI usable, scalable, and trustworthy. We explored what it takes to build that foundation: from the evolution of cloud-native to AI-native platforms, to the qualities developers need to design for reliability, governance, and speed without slowing innovation. 🚀
As part of that journey, we announced several new capabilities:
• KAI, our always-on API platform agent that can detect issues, suggest improvements, and even open pull requests;
• Kong Event Gateway, now GA, bringing governance and cost efficiency to event-driven APIs; and
• new AI integration tools that connect models, data, and services more seamlessly, leveraging MCP and more.
It was great to be joined by Ross Kukulinski and Smriti Kawal Jaggi, whose demos made these ideas tangible and real. 🎥 If you missed the session, you can watch the keynote here: https://lnkd.in/gFG2TKmG Would love to hear your thoughts — how are you seeing API platforms evolve for the AI era? 💬 #APIs #AIInfrastructure #PlatformEngineering #APIGovernance #APIM #DigitalTransformation #APIDriven #KongSummit #AIInnovation
23
1 Comment -
Priyadarshi Das
Barclays • 2K followers
Why Guardrails Are Critical for Safe AI Adoption
Generative AI has moved from a research novelty to an everyday reality. Tools like ChatGPT, LLaMA, and Gemini are redefining how we write, code, and create. But with great power comes great responsibility, and that’s where guardrails step in. LLMs are brilliant, but they’re not flawless. They can hallucinate facts, amplify bias, leak sensitive data, or even generate toxic content. Left unchecked, these risks can lead to misinformation, legal trouble, or loss of user trust.
So, what are guardrails? Think of them as AI safety layers: programmable rules and systems that monitor and control what LLMs produce. They act during user interaction, filtering unsafe inputs and outputs in real time. A good guardrail doesn’t just block bad content; it ensures outputs are accurate, ethical, and compliant.
Here’s how the ecosystem looks today:
✅ Llama Guard (Meta) – Fine-tuned on the LLaMA architecture to classify content into predefined safety categories. Flexible for different use cases.
✅ NVIDIA NeMo Guardrails – Uses structured conversational flows with KNN-based intent matching and moderation to keep conversations on track.
✅ Guardrails AI – Adds type and format constraints (e.g., enforcing JSON structures), plus corrective prompting if outputs fail checks.
✅ TruLens – Focused on evaluation and feedback loops to improve context relevance, groundedness, and fairness in RAG-based apps.
✅ Guidance AI & LMQL – Programming approaches that combine logic and generation, letting developers enforce constraints using regex, control flows, and real-time checks.
Why does this matter? Because attackers are getting creative. Jailbreak attacks, prompt injections, and adversarial queries can still bypass basic safeguards. Some even exploit language patterns or use multilingual prompts to trick models into producing harmful content.
Building strong guardrails isn’t just about adding filters; it’s about system design. We need:
✔ Multi-disciplinary strategies combining neural and symbolic reasoning
✔ Continuous monitoring and evaluation
✔ Integration across the AI development lifecycle, like safety-critical systems in aviation or automotive (think ISO 26262)
The ultimate goal is trust. Businesses, governments, and users will only adopt AI at scale if they believe it’s reliable, fair, and safe. Investing in guardrails is not an afterthought; it’s the backbone of responsible AI.
What’s next? Guardrails need to evolve beyond simple filters. The future lies in:
✔ Neural + symbolic systems working together for deep reasoning
✔ Lifecycle integration, from design to deployment (think ISO 26262 for AI)
✔ Continuous monitoring to detect new attack patterns and biases
#AI #LLM #ResponsibleAI #Guardrails #GenerativeAI #EthicsInAI #AITrust
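The output-checking and corrective-prompting pattern described in that post can be sketched in a few lines of plain Python. This is purely illustrative: `call_model` is a hypothetical stand-in for any LLM client, and no real guardrail library's API is used here; the point is only the loop of validate, re-prompt, and fail closed.

```python
import json

# Keys the guardrail requires in the model's JSON output (an arbitrary
# example constraint, not taken from any specific tool).
REQUIRED_KEYS = {"answer", "sources"}

def call_model(prompt):
    """Placeholder for a real LLM call; returns a canned JSON string."""
    return '{"answer": "42", "sources": []}'

def validate(output):
    """Return the parsed output if it meets the constraints, else None."""
    try:
        parsed = json.loads(output)
    except json.JSONDecodeError:
        return None
    if not REQUIRED_KEYS.issubset(parsed):
        return None
    return parsed

def guarded_generate(prompt, max_retries=2):
    """Generate with output checks; re-prompt correctively on failure."""
    output = call_model(prompt)
    for _ in range(max_retries + 1):
        parsed = validate(output)
        if parsed is not None:
            return parsed
        # Corrective prompting: feed the failure back to the model.
        output = call_model(
            prompt + "\nYour previous reply was not valid JSON with keys "
            + ", ".join(sorted(REQUIRED_KEYS)) + ". Try again."
        )
    # Fail closed rather than passing unchecked output downstream.
    raise ValueError("output failed guardrail checks")

result = guarded_generate("Answer the question as JSON.")
print(result["answer"])
```

Real guardrail frameworks layer much more on top (input filtering, safety classifiers, policy flows), but this validate-and-retry loop is the core shape of the "corrective prompting" idea the post mentions.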
1
-
Sharon Ashok
IBM • 194 followers
Excited to share a new blog that I co-authored with Sachin Kulkarni and Sayan Pal, now published on Medium! We explored how organizations can build a governed Data Product Marketplace using automated metadata enrichment powered by IBM watsonx.data. The blog covers key architecture concepts, governance practices, automation workflows, and how watsonx.data simplifies scaling enterprise data platforms. If you’re working with data engineering, governance, or AI adoption, this article may help you gain clarity on modern data product strategies. #IBM #WatsonxData #DataGovernance #DataMarketplace #MetadataManagement #DataProducts #AI #DataEngineering
11
-
Srinivas Nimmagadda
Over 20 years of engineering… • 998 followers
Interesting set of announcements from AWS (Swami Sivasubramanian) today about a comprehensive ecosystem for building and operating transformative AI agentic applications (Bedrock, Strands SDK, ..). Ashok Srivastava from Intuit gave an excellent example of what is possible with an AI platform approach (powered by data, agents, and cross-domain orchestration) to solve real-world #SMB problems. Multi-domain automation with data-driven AI smarts to delight customers is the future! #AIAgents #FinTechInnovation #AmazonBedrock #AWS
16
1 Comment -
Danielle Dean, PhD
SimpliSafe • 4K followers
Grateful for the opportunity to speak at GAI World on a panel moderated by the incredible Luda Kopeikina and alongside two inspiring leaders, Sharna Sattiraju and Neha Shah. Each of us shared how our companies are leveraging AI in unique ways to drive innovation and impact. I had the opportunity to speak about how we at SimpliSafe are applying AI to enhance safety and security in people’s everyday lives. As our moderator Luda Kopeikina so powerfully said: “AI success requires more than algorithms. It’s about building robust systems, yes—but equally about building trust, governance, and culture. The winning formula is: focus on high impact cases, start small, show value, empower champions, create excitement and scale responsibly. Technology may be moving fast, but people, education, and trust remain the real accelerators.” I couldn’t agree more. Responsible, human-centered approaches are what will ensure AI drives positive impact in our communities. Grateful for the chance to share ideas, learn from such thoughtful women leaders, and highlight the meaningful role AI can play in building a safer future. #AI #WomenInAI #Leadership #Innovation #GAIWorld
132
1 Comment -
Chinmay Soman
StarTree • 2K followers
After pioneering user-facing and agent-facing analytics, Apache Pinot and StarTree are now taking on a new frontier: powering analytical serving workloads directly from Apache Iceberg. Check out this excellent deep-dive from Neha Pawar and Gnanaguru Sattanathan: https://lnkd.in/gmhaEKXn It explains how Pinot achieves high-throughput, low-latency reads from Iceberg and how this architecture lets you tune for your workload to strike the right balance on the cost-performance spectrum. Proud of the team for continuing to innovate and push the boundaries of what’s possible in real-time analytics! 🚀
85
-
Vineeth Loganathan
Thumbtack • 6K followers
Hear from Vijay Raghavan, Head of Applied Science at Thumbtack on how we are using AI to not just accelerate software development, but also as a tool to transform the home care market with multi-modal intelligence. To learn more about our GenAI strategy, refer to this article from our Engineering Leader Navneet Rao! https://lnkd.in/ekJ_KsZh
20
1 Comment