eric dransfeldt
Portland, Oregon Metropolitan Area
11K followers
500+ connections
About
Services
Activity
-
eric dransfeldt shared this:
🌟 Exciting Update for the Deep Signal Library! 🌟 Thrilled to unveil some powerful features in Version 1.6 that are set to amplify your trading game! 🚀 Here's the scoop on what's new:
1️⃣ Permutation Feature Importance: Dive deep into your models! With Feature Importance, you can scrutinize the impact of added features on your models and uncover the gems that truly contribute to model excellence. 🕵️♂️✨
2️⃣ Dataset Weights for Precision: Achieve balance effortlessly! Introducing Dataset Weights, which let you assign weights to the Profit Target and Profit Target Not Reached datasets. Whether you prefer automatic balancing or want to customize the weights, the power is in your hands! ⚖️💰
3️⃣ Trainer Customization Bonanza: Elevate your model creation process with a wealth of customization options! Explore the new Sdca, Lbfgs, Fast Forest, Fast Tree, and Lgbm trainer options. Fine-tune to perfection with control over L1/L2 regularization, leaves, trees, feature fraction, example count, bin count, and learning rate. 🛠️🌲
#deepsignaltech #ninjatrader #machinelearning #tradingstrategy
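As a concrete illustration of what permutation feature importance measures, here is a minimal sketch in Python (synthetic data and a toy threshold model, not the Deep Signal implementation): shuffle one feature column at a time and record how much the model's accuracy drops. Features whose shuffling hurts accuracy are the ones the model actually relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "trading" dataset: the label (profit target hit or not)
# depends only on feature 0; features 1 and 2 are pure noise.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

# A trivial fitted "model": predict 1 when feature 0 is above its training mean.
threshold = X[:, 0].mean()
def predict(data):
    return (data[:, 0] > threshold).astype(int)

baseline = (predict(X) == y).mean()

# Permutation importance: shuffle one column at a time, measure accuracy drop.
importance = []
for col in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    importance.append(baseline - (predict(Xp) == y).mean())

for col, imp in enumerate(importance):
    print(f"feature {col}: importance {imp:.3f}")
```

Feature 0 shows a large accuracy drop when shuffled, while the noise features show roughly zero, which is exactly the kind of signal/noise separation the post describes.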
-
eric dransfeldt shared this:
Deep Signal is excited to introduce our Training Data Viewer for the Deep Signal Library, which allows traders to visualize the data being used to train their machine learning models. The chart displays where the Pre-Signal window, Signal Bar, and Bars To Target window are located relative to the data that was used to create a model. For more information on how to use the Training Data Viewer with the Deep Signal Library, please see our Online Help. #deepsignaltech #ninjatrader #machinelearning #tradingstrategy
-
eric dransfeldt shared this:
Deep Signal is excited to announce the release of the Deep Signal Library 1.5. The new version includes Regression and Multiclass machine learning trainers for creating new financial trading models. Please download the latest Deep Signal Library and try out the new trainers. #NinjaTrader #TradingStrategy #DeepSignalTech
-
eric dransfeldt shared this:
Want to use multiple machine learning models for trading different market conditions or triggers to enter a trade? The Deep Signal Library can now use multiple models in one strategy. Please download the latest Deep Signal Library to try it out! #NinjaTrader #TradingStrategy #DeepSignalTech
-
eric dransfeldt shared this with the comment: "Great list of courses! Thank you for sharing!"
So you are looking for Machine Learning courses on YouTube? Sure, here you go! You are welcome!
- Andrew Ng CS229 ML: https://lnkd.in/gkDEyuCS
- MIT: Deep Learning for Art: https://lnkd.in/grusgt3Z
- Stanford CS230: Deep Learning: https://lnkd.in/ggXNEX7K
- Practical Deep Learning for Coders: https://lnkd.in/giHMNrHG, https://lnkd.in/gDtRtHmG
- Stanford CS224W: Machine Learning with Graphs: https://lnkd.in/grZC_j4N
- Probabilistic Machine Learning: https://lnkd.in/gjSpNDCD
- MIT 6.S191: Introduction to Deep Learning: https://lnkd.in/gWtSdkSH
- UC Berkeley CS 182: Deep Learning: https://lnkd.in/gzHS6m8G
- UC Berkeley Deep Unsupervised Learning: https://lnkd.in/gPdPbKku
- Yann Lecun's NYU Deep Learning SP21L: https://lnkd.in/gdyzmf8b
- Stanford CS25 - Transformers United: https://lnkd.in/gaZVn3wY
- Hugging Face NLP Course: https://lnkd.in/gigfE2Yj
- Stanford CS224N: Natural Language Processing with Deep Learning: https://lnkd.in/g4fg4_wX
- CMU Neural Nets for NLP: https://lnkd.in/gVpUwtXE
- Stanford CS224U: Natural Language Understanding: https://lnkd.in/gMeGkkzV
- CMU Advanced NLP: https://lnkd.in/gAtrsGqY
- CMU Multilingual NLP: https://lnkd.in/ghbcWftV
- Stanford CS231N: Convolutional Neural Networks for Visual Recognition: https://lnkd.in/g3DeCWEc
- Michigan Deep Learning for Computer Vision: https://lnkd.in/gbdgGgJQ
- AMMI Geometric Deep Learning Course: https://lnkd.in/gYH6Vuum
- UC Berkeley CS 285 Deep Reinforcement Learning: https://lnkd.in/gH-HYdqz
- Intro to Deep Learning and Generative Models: https://lnkd.in/gxuTtkSk
- Stanford CS330: Deep Multi-Task and Meta Learning: https://lnkd.in/gasntdBh
Source: https://lnkd.in/gys5Rk5k
Looking for career mentoring and/or Machine Learning consulting services? Let's chat: https://lnkd.in/gGBMXuR4 #machinelearning #deeplearning
-
eric dransfeldt shared this:
We're excited to announce the Deep Signal Library version 1.3 has been released. It includes the ability to choose what trainer to use when creating a machine learning model. You can still have the Deep Signal Library try all available trainers when creating a model to find the best performing algorithm. The progress window that is displayed when creating a new machine learning model has been updated. It shows the best performing trainer in real time as the model is being created. It also keeps track of each trainer's performance so the user can see how each trainer performed during the training run. If you find some trainers do not work well for that dataset, you can deselect them for future training runs. #deepsignaltech #ninjatrader #tradingstrategy
-
eric dransfeldt shared this:
This eBook takes a deep dive into many common software architecture patterns. Free, our gift to you.
-
eric dransfeldt shared this:
We are excited to announce that the Deep Signal Machine Learning Library for NinjaTrader is available for download. After two years of development and a successful beta program, the Deep Signal Library is available for traders who want to create and use machine learning models for financial trading. The library is an extension of NinjaTrader that automates the process of creating a machine learning model that can be used in trading. #machinelearning #trading #ninjatrader #deepsignaltech
-
eric dransfeldt shared this:
We are hiring research interns at Microsoft Mixed Reality! If you are a PhD student interested in deep learning and computer vision, and want to see how research gets turned into world-class devices like the #HoloLens, consider submitting your resume at the following link: https://lnkd.in/ghJngRM #microsoft #hiring #computervision #deeplearning
-
eric dransfeldt liked this:
Over 80% of published investment factors are likely false. In a new paper, Frank Fabozzi and I argue that the core problem is not only multiple testing; it is identification failure. Our work reconciles the apparent contradiction between academic claims and investors' experience: it answers why academic studies estimate that the False Discovery Rate (FDR) in finance is only 10%, while investors find that most factor investment funds fail to perform. The reason is that published statistics are not outcomes of a single trial. They are the result of selecting the most favorable outcome among many within-study candidate specifications. That changes the statistical experiment. And once the experiment changes, standard inference breaks.
Main messages:
- The FDR cannot be identified from published in-sample statistics alone
- Under standard selection rules, single-trial inference understates the true FDR
- Very different underlying worlds can generate very similar reported significance profiles
- Under an explicit search-adjusted parametric model, the fitted FDR exceeds 80%
Practical implication for academics, practitioners, and investors: statistical significance is not enough. Reliable inference requires either:
- an explicit model of the search-and-selection process, or
- genuinely independent validation data
A broader lesson is that complexity is not a virtue when it merely hides specification search. A more reliable path is causal factor investing: signals grounded in economically coherent mechanisms, not just in-sample winners.
Paper: https://lnkd.in/d96CEN8z
Data & Code: https://lnkd.in/drWephNi (GitHub: lopezdeprado/FDR-in-Finance, code for replicating the paper "The False Discovery Rate in Finance: Identification Failure and Search-Adjusted Estimation")
#Finance #AssetPricing #FactorInvesting #Quant #MachineLearning #Statistics #FalseDiscoveryRate #BacktestOverfitting #CausalInference #Research
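The selection effect the post describes is easy to reproduce: simulate studies that test many candidate specifications on pure noise and "publish" only the best one. The sketch below uses arbitrary parameter choices (not taken from the paper) to show how single-trial significance thresholds break down under within-study selection.

```python
import numpy as np

rng = np.random.default_rng(42)

# 1,000 "studies"; each tries 50 candidate specifications on pure noise
# (no true signal exists) and reports only the best t-statistic.
n_studies, n_specs, n_obs = 1000, 50, 252
reported_t = []
for _ in range(n_studies):
    returns = rng.normal(0.0, 1.0, size=(n_specs, n_obs))  # zero-mean noise
    t_stats = returns.mean(axis=1) / (returns.std(axis=1, ddof=1) / np.sqrt(n_obs))
    reported_t.append(t_stats.max())  # selection: publish the best spec

# Fraction of published "factors" clearing t > 1.96, though all are false.
fdr_like = np.mean(np.array(reported_t) > 1.96)
print(f"share of null studies reporting t > 1.96: {fdr_like:.2f}")
```

With 50 specifications per study, a majority of purely random "factors" clear the conventional significance bar, which is the identification failure the post highlights: the reported statistic no longer comes from a single-trial experiment.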
-
eric dransfeldt liked this:
Most teams deploy LLMs with default settings and wonder why inference costs $50K/month. The optimization stack exists. Most engineers don't know the layers. Here's the full inference optimization hierarchy:
LAYER 1: Serving architecture. Before touching a single kernel, get your serving right. vLLM (74K ⭐): PagedAttention, continuous batching. https://lnkd.in/eeT_HM2B SGLang (25K ⭐): structured generation + RadixAttention, faster for constrained outputs. https://lnkd.in/eKK7sxdf
LAYER 2: Quantization. Shrink the model without killing accuracy. llama.cpp (92K ⭐): GGUF quantization, run 70B on consumer hardware. https://lnkd.in/eJrUg_qd Unsloth (50K ⭐): QLoRA fine-tuning at 70% less VRAM. https://lnkd.in/gJZtH4Y4 This layer alone can cut your GPU bill in half.
LAYER 3: Attention + caching. How much are you spending on redundant prefill? Flash Attention (21K ⭐): memory-efficient, IO-aware, non-negotiable. https://lnkd.in/eYkuRuxC LMCache (1.5K ⭐): KV cache sharing, eliminates redundant prefill entirely. github.com/LMCache/LMCache
LAYER 4: Hardware-specific acceleration. Match your optimization to your silicon. TensorRT-LLM: purpose-built for NVIDIA GPUs, kernel fusion, in-flight batching. https://lnkd.in/ekuFuDAP MLX: native framework for Apple Silicon, inference without CUDA. github.com/ml-explore/mlx
LAYER 5: Custom kernels. Where the real differentiation lives. LeetCUDA (9K ⭐): 200+ CUDA kernels, Tensor Cores, HGEMM. https://lnkd.in/eUfgpwW6 llm.c (28K ⭐): Karpathy's raw C/CUDA, the fundamentals. github.com/karpathy/llm.c Engineers who write custom kernels command $200K+ at NVIDIA, Meta, and Google.
LAYER 6: Distributed inference. When one node isn't enough. NVIDIA Dynamo: multi-node orchestration, disaggregated serving. https://lnkd.in/etBGNtjk exo (39K ⭐): distributed inference across consumer devices. github.com/exo-explore/exo
6 layers. Each one multiplies the savings from the layer above. Most teams stop at Layer 1. The ones running inference profitably reach Layer 5. Which layer is your team stuck at? 👇 💾 Bookmark this. Your next inference bill will thank you.
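A quick back-of-envelope on why the quantization layer cuts bills: weight memory scales linearly with bytes per parameter. The numbers below are a rough sketch for a 70B-parameter model; real quantization formats mix precisions and carry scale-factor overhead, and activations and KV cache are excluded.

```python
# Back-of-envelope weight memory for a 70B-parameter model at the
# precisions quantization typically targets (weights only).
PARAMS = 70e9
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

weight_gib = {name: PARAMS * b / 2**30 for name, b in BYTES_PER_PARAM.items()}
for name, gib in weight_gib.items():
    print(f"{name:>5}: {gib:6.1f} GiB of weights")
```

At fp16 the weights alone exceed a single 80 GiB GPU, while 4-bit quantization brings them within reach of high-end consumer hardware, which is the "run 70B on consumer hardware" claim in rough numbers.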
-
eric dransfeldt liked this:
i'm starting a newsletter called Owners, Not Renters - because closed AI isn't winning because it's better. it's winning because it's easier. we've been here before. a dominant platform, everyone locked in, and then, eventually, a shift. open becomes the practical choice, not just the principled one. that shift is starting! 70b models on a mac. ollama pull and you're running. developers swapping out cursor, swapping out claude code, for opencode and local weights and local context. your data never leaves your machine. the defaults are hardening. that's the part that keeps me up. i want to see wonderful things built on open. weird, surprising, new experiences we can't imagine yet. but you don't get there until you get the tools right. maslow's hierarchy. before wonderful comes: how do i actually get this running? i don't know yet. i'm trying to figure it out. i'll tell you what i'm learning - what models are actually good, where the stack is still duct tape, what's working and what isn't. and i want to hear what you're learning. subscribe. reply to the first one. i'm reading everything. link in comments.
-
eric dransfeldt liked this:
An open-source alternative to Claude! (18k+ stars; works with any LLM) Onyx is a self-hostable AI chat platform that works with any LLM, like Claude, GPT, Gemini, Llama, or any open-weight model you want. Here's what it ships with:
- Agents that chain multiple tools in sequence
- RAG with full indexing across 40+ connectors (Slack, Drive, Confluence, Jira, GitHub, email, call transcripts)
- Deep research ranked #1 on DeepResearchBench, above every proprietary alternative
- MCP support for connecting to external systems
- Code interpreter for data analysis and file generation
- Self-host on your own infrastructure via Docker in a few mins
Unlike Claude's MCP-based connectors that query your tools at runtime, Onyx actually indexes all your data. That means faster, more reliable search across everything your team has ever written. The entire code is open-source under the MIT license, so you can see the full implementation on GitHub and try it yourself. Link to the repo in the first comment!
Share this with your network if you found this insightful ♻️ Follow me (Akshay Pachaar) for more insights and tutorials on AI and Machine Learning!
-
eric dransfeldt liked this:
Anthropic is coming to Portland, yes THAT Anthropic... pt.2! Our first Claude event hit 500+ RSVPs and sold out in HOURS - so we're running it back for pt.2! If you didn't get a chance to be in the room the first time, here is your chance now. We're hosting a Claude meetup to show the world what we're capable of. This is our chance to put Portland on the map + prove that we DO belong in the AI conversation. None of this happens without this incredible group of people - massive shoutout to Dinesh Mathew, Abraham N., Todd Greco, Dave Barcos, Sam Keen, Joanna Gough, Natalie Tashchuk, Grant Macdonald, Dominic Kuklawood, Rick Turoczy, Mike Biglan, M.S. + many more. These people showed up when it mattered and are bringing our communities together in the AI era. Now it's your turn. Seats are extremely limited. This will sell out. Period. Comment "Claude" and I'll send you the link to apply. (Make sure we're connected)
-
eric dransfeldt liked this:
It was an honor to hang out with Jensen Huang, CEO of NVIDIA, and do a long-form podcast with him. Really fun & fascinating technical deep-dive conversation on & off the mic. One of the most brilliant & thoughtful human beings I've ever met. NVIDIA is the most valuable company in the world by market cap and is the engine powering the AI revolution. Podcast probably out tomorrow (Monday), unless I get stuck in too many interesting conversations while running around in SF ;-) PS: I haven't checked my messages in days. Sorry for slow replies 🙏 Trying to stay deeply focused in an overwhelmingly intense time & barely hanging on. Love you all! ❤️
Experience & Education
-
Deltek + ComputerEase
********* ******** ********
-
**** ****** ************
*********
-
**********
********* ******** ********* *********
-
****** ***** **********
** ******** *******
Patents
Other similar profiles
Explore more posts
-
William R.
3K followers
If there are any low-level coders in my network, please check out Troy’s GitHub and his project to build a new, faster shell! Some details on Troy:
### Windows Accessibility and TTS Contributions (Red Eagle Team)
- Description: As part of the Red Eagle Team (a Georgia-backed initiative with Microsoft support in the mid-2000s), Mallory developed foundational code for universal Text-to-Speech (TTS) systems and cross-platform accessibility features. This included compatibility layers for screen readers, app navigation, and voice input (e.g., influencing tools like Dragon NaturallySpeaking).
- Impact on Windows: Microsoft adopted and evolved his logic starting with Windows Vista, integrating it into core OS features for impaired users. If you've used modern Windows TTS, clipboard integrations, or basic accessibility navigation, elements trace back to this work.
- Modern Extensions: He's currently exploring IoT/smart home integrations for disabled veterans, building on this foundation.
Troy’s post:
“Okay guys, think tank. I may be overcomplicating this or may have to do a bit of tweaking with some C code. Rubian has been going smoothly. The new boot and dynamic system discovery daemons work great as standalone. The new stack needs tweaking for efficiency but works and is stable. Here's the stack I'm working on with the prototype test.
/etc/rc.local -> calls boot.rb as a background process.
boot.rb -> bootstraps Rubian's system discovery and array logic.
User login -> rubian3.rb (repl) -> chsh -s ./rubian3.rb USER
Rubian boot.rb logic: starts C Ruby and JRuby. C Ruby starts the bootstrap process while JRuby warms up, then passes the workload to JRuby when it is ready to finish booting the system. The daemon system is the server side, where shells and env are the client. (See my UNIX socket prototype here: https://lnkd.in/gf6rWs46) C Ruby and JRuby can now natively talk to each other. I know that works. The issue I'm having is the new login shell isn't hooking into the daemon when it loads.”
https://lnkd.in/g5_2ZuPH
A further explanation:
“Yes. I've been building from the shell working down to the kernel. The shell side of things mimics Bash but in Ruby expressions. On the logic side of things, Ruby treats everything as an object. What I'm doing is taking all the system information and building arrays. Each directory and file then has an index that we can iterate through with .each_with_index, .each_with_object, etc. So instead of a static file tree, I have arrays of nested file paths and directories being pattern matched. Any Ruby app will run and communicate with it. This allows for dynamic file utils systems and more. Each array index points to a file, directory, process, thread, interpreter, shell, hardware, everything that is a defined method object.”
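The quoted array-index idea, flattening a file tree into an indexed, iterable array, can be sketched in a few lines of Python (illustrative only; Rubian itself is written in Ruby):

```python
import os

# Flatten a directory tree into one indexed array of paths, so tools can
# pattern-match and iterate by index instead of walking a static tree.
paths = []
for root, dirs, files in os.walk("."):
    for name in files:
        paths.append(os.path.join(root, name))

# Each entry now has a stable index, like Ruby's each_with_index.
for i, path in enumerate(paths[:3]):
    print(i, path)
```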
3
-
Optavyn
52 followers
👨💻 We don’t micromanage at Optavyn. Every dev here knows they’re not “just writing code”, they’re building products that matter. Every feature we ship solves a real-world problem. Every API we build connects with a user’s workflow. Every screen we design has intent behind it. Our team works remote, but we’re aligned, focused, and committed to the end goal: Delivering results not just tasks. If you're a founder or agency who values clear ownership, clean code, and reliable delivery, we might be the team you’ve been looking for. #TeamCulture #RemoteTeam #SoftwareProduct #OptavynCulture #ExecutionMatters #TechThatWorks
4
-
Adam Millbery
Wayve • 2K followers
📰 📣 "September 18, 2025 – Wayve, the leader in Embodied AI for autonomous driving, today announced it has signed a letter of intent with NVIDIA to evaluate a $500 million strategic investment in Wayve’s next funding round." https://lnkd.in/eX5jJdbE #wayve #nvidia #investment #embodiedAI #AI #AV #autonomousdriving
7
-
Jim Bright
GTN Technical Staffing • 9K followers
Code review becomes more demanding when the context behind the code is missing. When engineers write code themselves, the reasoning behind the structure, tradeoffs, and edge cases live in their head. AI-generated code arrives as finished output, but without that reasoning. Developers inherit the implementation but not the decision process. A survey from Harness found: • 67% of developers spend more time debugging AI-generated code • 68% spend more time reviewing it #AIContextGap #TeamGTN #BrightBombs
-
Gueri Segura
Tenmas.Tech LATAM • 9K followers
CTO: I shared a thought in a private engineering leadership group about AI speeding up coding and didn’t expect the response it got. Dozens of CTOs and senior engineers weighed in. Lots of disagreement. Lots of nuance. The takeaway that stuck with me: AI didn’t break engineering. It exposed it. Teams with strong DevEx, clear reviews, and healthy discipline seem to get more leverage from AI. Teams with weak reviews, unclear requirements, or coordination gaps feel more friction — not less. In most replies, the problem wasn’t “AI wrote bad code.” It was: – reviews that don’t scale – QA lagging behind – context getting lost – commit → deploy becoming the real bottleneck Faster coding just moves the constraint. Curious how others are seeing this play out inside their teams. #EngineeringLeadership #AIinEngineering #DevEx #CTO
2
-
Bron Davies
ProPlans • 1K followers
Section 174 of the U.S. tax code, starting in 2022, required software development costs (including salaries of engineers) to be amortized over five years for U.S. workers and 15 years for foreign workers, rather than being deducted in the same year. This policy disproportionately hurt small and bootstrapped tech companies, resulting in higher tax burdens even when no profits were made. However, a recent legislative update, part of Trump’s “Big, Beautiful Bill”, reversed this requirement for U.S.-based developers, allowing same-year deductions again and enabling companies to retroactively refile taxes for 2022–2024 to claim refunds. Despite this relief, the amortization requirement still applies to foreign development work, which will likely lead U.S. companies to reduce international hiring and shift more software development back to the U.S. https://lnkd.in/gDKWPthG
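A simplified worked example of why the amortization rule stung: with the same cash outlay, deferring four fifths of the deduction inflates year-one taxable income. The payroll, revenue, and flat 21% rate below are hypothetical, and the mid-year convention is ignored.

```python
# Illustrative year-one impact of Section 174 amortization (simplified;
# hypothetical figures, flat corporate rate, mid-year convention ignored).
salaries = 1_000_000   # hypothetical annual US dev payroll
revenue = 1_200_000
tax_rate = 0.21

# Pre-2022 treatment: deduct the full cost in the same year.
profit_deducted = revenue - salaries
tax_deducted = max(profit_deducted, 0) * tax_rate

# 2022-2024 rule: amortize over 5 years, so only 1/5 is deductible year one.
profit_amortized = revenue - salaries / 5
tax_amortized = max(profit_amortized, 0) * tax_rate

print(f"tax with full deduction: ${tax_deducted:,.0f}")   # $42,000
print(f"tax with amortization:  ${tax_amortized:,.0f}")   # $210,000
```

The cash spent is identical in both cases, but the year-one tax bill is five times larger under amortization, which is why thin-margin and bootstrapped companies were hit hardest.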
9
2 Comments -
Conor McGann
2K followers
I’ve been spending some sabbatical time at sea—reflecting, recharging, and diving deep into large language models (LLMs) and how robust systems might truly leverage them. Over the past 30+ years, I’ve worked on AI applications spanning machine learning, NLP, symbolic reasoning, and task/motion planning—applied in domains from contact centers to underwater robots. These were systems that had to work—some fully autonomous, some with humans in the loop. Only recently did I come across the term “Jagged AI”—a useful lens for thinking about LLMs: powerful but uneven, capable in unexpected places and brittle in others. This presents both an opportunity and an engineering challenge: how do we design systems that harness their strengths while maintaining the rigor and robustness real-world applications demand? This video by Nate Jones adds a valuable perspective: https://lnkd.in/dfQWAqNb From what I’m seeing, hybrid architectures—combining symbolic and neural methods, often with humans in the loop—are not just interesting but essential. Curious to hear from others: Who’s building mission-critical systems that include LLMs? How are you tackling robustness?
37
13 Comments -
Harton Wong
TABot • 1K followers
Great framing. Technical accountants have always known the hardest part isn’t finding guidance, it’s excluding the noise. That’s where most AI falls short. Accountants shouldn’t have to guess how much context is “enough” for a model. This is what TAbot is built for. It continuously curates and feeds the right amount of high-signal context to the AI, so accountants get leverage without having to become prompt engineers. AI should adapt to accounting judgment, not the other way around. Highly recommend the read: https://lnkd.in/gxv3kiJx
2
-
Pallav Singh
Microsoft • 13K followers
I recently built a toolchain for Clang (branch: bloomberg/clang-p2996) from upstream Clang compilers about 5 days ago. However, I’m running into issues with C++26 reflection: it’s not compiling as expected. If anyone has successfully built or used reflection features on this branch, I’d appreciate any insights or guidance.
root@pallav-VMware-Virtual-Platform:~# clang --version
clang version 21.0.0git (https://lnkd.in/g3jJS4fU 2ea0a79fe7bb5f6fdb8c687ba0e21ab63696e7f7)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /root/clang-p2996/build/bin
root@pallav-VMware-Virtual-Platform:~# clang++ --help | grep -i reflect
-fentity-proxy-reflection  Enable proposed reflection of entity proxies as described by P3XYZ
-fparameter-reflection     Enable proposed parameter reflection as described by P3096
-freflection-latest        Enable all reflection features supported by Clang/P2996
-freflection               Enable proposed C++26 reflection as described by P2996
root@pallav-VMware-Virtual-Platform:~# /root/clang-p2996/build/bin/clang++ \
  -std=c++2c \
  -freflection-latest \
  -stdlib=libc++ \
  -nostdinc++ \
  -I/root/clang-p2996/build/include/c++/v1 \
  -L/root/clang-p2996/build/lib \
  pallav.cc -o pallav
pallav.cc:1:10: fatal error: 'meta' file not found
1 | #include <meta>
  |          ^~~~~~
1 error generated.
#cplusplus #cpp #CPlusPlus26 #ModernCpp #CppReflection #metaprogramming
4
2 Comments -
Alex Telford
Floating Rock Studio • 2K followers
A bit of fun while I wait for initial feedback on my book. I've packaged up some of my python qt utilities to make building Qt widget UIs easier. Key Features: * Property Bindings - Bidirectional bindings - Expression bindings (like qml, but in widgets) * Data Mapper - Maps abstract user role data to widgets * Paint layouts - Simplified anchor based paint utils - Layout based painter for using QLayout in paint events * Widgets - Float Slider - Range Slider Currently v0.1.0 beta, more widgets and features to come. Check it out here: https://lnkd.in/gwWmGVmR
69
4 Comments -
Kiran Grandhi
Modlix • 3K followers
🚀 Big moves in the JavaScript world! The latest TC39 updates bring 9 new proposals into the spotlight, and some of them are truly game-changing. From native .at() support on typed arrays to new syntax like do {} expressions, the language is evolving to be cleaner, safer, and more expressive. These improvements aren’t just academic: they’ll shape how we write, debug, and think in JS. Read the full breakdown here: https://lnkd.in/gpejsmcv #JavaScript #WebDev #TC39 #ESNext #Programming
1
-
Mathieu Kessler
Technip Energies • 1K followers
New drop: 19 prompts for running multi-agent Claude Code teams. If you've started using Claude Code for real work, you've probably hit the wall where one agent isn't enough. Parallel file changes. Conflicting decisions. No handoff structure. Tasks that take 20 minutes ballooning into 2-hour debugging sessions. The answer isn't to work harder with a single agent. It's to run a team and run it properly. Today I dropped AGENT-001: Claude Code Team Playbook, 19 operational prompts covering the full lifecycle: ▸ Decide & Dispatch: know before you start whether you even need a team ▸ Orchestrate & Execute: Plan-Then-Swarm, Delegate Mode, Adversarial Code Review, Progressive Delegation ▸ Quality Gates & Hooks: automate validation when teammates go idle or complete tasks ▸ Recover & Optimize: stall detection, drift recovery, context window management, retrospectives + A full learn guide The theory behind the prompts. When multi-agent makes sense, how to structure teams, patterns that work, and the anti-patterns that will waste hours of your time. Everything's free. No signup. 📦 Pack → https://lnkd.in/drzM5piT 📖 Guide → https://lnkd.in/dmFJ-gfv This is the third pack in the Claude Code series on NerdyChefs, alongside CLAUDE-001 (effort level optimization) and MCP-001 (building & securing MCP servers). If you're using Claude Code in production and want to talk about scaling your AI engineering setup, drop me a message. #ClaudeCode #AIEngineering #MultiAgent #Anthropic #DevOps
13
-
Gunnar Morling
Confluent • 9K followers
Seeing quite a few discussions lately about Kafka/Iceberg integrations being "zero-copy" or not. I think this is largely missing the point. First, where I agree is that this integration should be "zero-effort" for users. Materializing a Kafka topic into an Iceberg table shouldn't require more than a click of a button. Queries should provide a uniform way for accessing the data in both a topic and the corresponding table. This is the stream/table duality, and it should Just Work™. Now, whether this requires to store the bytes of data once in a Kafka topic, and a second time elsewhere for table access, shouldn't really matter from a user perspective. I'd argue storing the data twice is actually a benefit, and in fact it's a pattern well established: it resembles the design of WAL and table files known from databases for decades. I don't think anyone ever complained about this structure in their RDBMS? Which makes sense, it's an implementation detail, opaque to users. But as it turns out, having log and table data separately is even more advantageous for the deconstructed database that is Kafka and Iceberg: you can have multiple readers of the same log (Kafka topic), materializing views in multiple destinations and systems optimized for specific use cases. Maybe multiple Iceberg tables with different projections (think PII), maybe an Iceberg table and a full-text index in Elasticsearch, maybe an... you catch my drift. Furthermore, the log is replayable, so you can recreate views if needed, or you can implement new use cases you didn't originally have in mind. All in all, I think "zero copy" is mostly a red herring. Sure, it can be an optimization for certain scenarios, but mostly it's a distraction from the immense value you get from combining Kafka and Iceberg when done the right (seamless) way.
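The "one replayable log, many materialized views" pattern described above can be sketched in a few lines (field names and values are illustrative): replaying the same event log yields both a latest-record table and a PII-free aggregate.

```python
# One replayable log, multiple independently materialized views
# (illustrative sketch of the stream/table duality; not Kafka/Iceberg code).
log = [
    {"user": "a", "email": "a@x.com", "amount": 10},
    {"user": "b", "email": "b@x.com", "amount": 5},
    {"user": "a", "email": "a@x.com", "amount": 7},
]

# View 1: a "table" keyed by user, latest record wins (upsert semantics).
latest = {}
for event in log:
    latest[event["user"]] = event

# View 2: an aggregate with the PII (email) projected out entirely.
totals = {}
for event in log:
    totals[event["user"]] = totals.get(event["user"], 0) + event["amount"]

print(latest["a"]["amount"])  # 7: the last event for user "a" wins
print(totals)                 # {'a': 17, 'b': 5}
```

Because the log is retained, either view can be dropped and rebuilt by replay, which is the point about recreating views and implementing new use cases later.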
127
26 Comments -
Jack Vanlightly
Confluent • 3K followers
Duplication avoidance (zero copy) is just one aspect of "trade off optimization" in data system architecture. Any data system design must balance cost with performance and complexity, against a number of constraints. So while there are benefits to "zero copy", it is one aspect of a much larger architectural discussion. For example, we may choose to keep one or more partial copies for indexes to boost read performance, we may choose to store data in both row and columnar formats. We may store precomputed data and raw data. The list goes on. So I kind of agree that zero-copy itself being a major focus can be too limiting, simplistic and an artificial battleground.
41
4 Comments -
Rick Wise
CloudWise • 4K followers
MSK (Managed Kafka) looks simple on the bill until you pull it apart. Most teams know they pay for broker hours, which range from ~$0.043/hr for a kafka.t3.small to over $10/hr for a kafka.m5.24xlarge. A common production setup with three kafka.m5.large brokers costs around $460/month — that's 3 brokers × $0.21/hr × 730 hours — just for compute. However, many overlook the ongoing costs of storage at $0.10 per GB per month and the potential charges for data transfer, which is $0.01 per GB each way for cross-Availability Zone (AZ) traffic. An often missed aspect is the waste incurred from idle clusters. If no messages are processed over a seven-day period, as detected by CloudWatch, the cluster is effectively idle — and unlike EC2, MSK clusters cannot be stopped, only deleted. Common scenarios that lead to idle clusters include deprecated streaming pipelines or development clusters mirroring production setups left running without a purpose. Taking a moment to regularly review cluster activity can prevent unnecessary charges and ensure that resources align with current application needs. CloudWise detects idle MSK clusters automatically so you don't have to audit them manually. #AWS #AWSMKS #CloudWise #CostOptimization #FinOps
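The broker-compute arithmetic in the post checks out; a small sketch reproducing it (the unit prices are the ones quoted, while the storage and transfer volumes are hypothetical):

```python
# Reproducing the MSK broker-compute estimate from the post:
# three kafka.m5.large brokers at ~$0.21/hr for a 730-hour month.
brokers, rate_per_hr, hours = 3, 0.21, 730
compute = brokers * rate_per_hr * hours
print(f"monthly broker compute: ${compute:,.2f}")  # ≈ $460

# The often-overlooked line items, at the per-unit prices quoted:
storage = 500 * 0.10       # 500 GB at $0.10/GB-month (hypothetical volume)
transfer = 2 * 200 * 0.01  # 200 GB each way cross-AZ at $0.01/GB (hypothetical)
print(f"storage: ${storage:.2f}, cross-AZ transfer: ${transfer:.2f}")
```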
5
1 Comment -
First Resonance
5K followers
Last week at First Resonance, we got to see something refreshing: AI used for real work—not hype. Actual workflows. Real manufacturing problems. Practical answers. The El Segundo Hardtech community team walked through how Claude can support engineering and manufacturing teams—from parsing technical documentation to helping teams move faster through operational challenges. What stood out most wasn’t just the capability—it was the curiosity in the room. Engineers asking sharp questions. Operators thinking through real-world use cases. Founders connecting the dots to their own production environments. Huge thanks to Martine N., Ado Kukic, and the entire Anthropic team for bringing depth, clarity, and a genuinely thoughtful approach to AI in manufacturing. #Manufacturing #AI #HardTech #Engineering #FirstResonance #Anthropic
11
-
Daniel Karpienia
Otherlife • 3K followers
Dank Dispatch #0017: Compute, Compliance, and the Volatility Reset AI is absorbing idle mining infrastructure. The SEC is defining ETF rails. ETH skew has flipped bullish, despite low volatility. The signal is clear: systems are now judged by how well they behave under constraint. Compliance isn’t an afterthought—it’s architecture, and AI-powered compute is no longer reserved for crypto alone. 👉 Full dispatch: https://lnkd.in/gi5RWesu #Web3 #Crypto
-
Robb Christenson
Peerless Search Partners • 7K followers
Looking for signs of optimism as we head into spring? Software Engineer job postings are up 11% year over year. This was one of the jobs that several "experts" have identified as dying due to AI. But the numbers are telling a different story. So...Is this a bounceback or a dead cat bounce?
2