At Procter & Gamble, Tatiana Cortés Cardona once spent nearly a month integrating data for a single brand — cleaning it, connecting it, extracting insights. Last week at our Mindstone meetup, she showed how she now does the equivalent in minutes.

Tatiana is Europe Marketing Director at Air Global, a former P&G and Kellogg's marketer, and an AI consultant working across Europe and South America. Her talk — 'Beyond Prompts: Choosing the Best AI for Multimedia Marketing Intelligence' — was built around a live demo, not slides full of theory.

She fed Gemini a real mix of assets: McKinsey's State of Fashion report, a Kantar deck, an Excel market overview, and several YouTube videos on trends and sustainability. The output? A full market intelligence report with consumer profiles, trend forecasts, opportunity maps, and white-space analysis — all cross-referenced across sources.

A few things from the talk worth sitting with:

📍 Tool choice matters more than most people realise. ChatGPT is strong for content creation. Claude for organic written communication. But when you need to ingest documents, spreadsheets, and video simultaneously, Gemini is in a different category.

📍 The language barrier becomes much less of a problem. Tatiana manages markets across Europe without speaking German. She uses Gemini to pull and analyse scripts from German YouTube creators — giving her competitive intelligence she'd otherwise never access.

📍 The core skill is shifting. It's no longer just about knowing how to connect the dots. It's about knowing which tool connects them for you — and asking it the right questions.

This is exactly the kind of practical, experience-first knowledge sharing that makes our community worth showing up for.

Watch the full talk and live demo here: https://lnkd.in/gb25N_BB

#PracticalAI #FutureOfWork #MarketingIntelligence
Mindstone
Software Development
The simplest way for professionals and teams to learn #PracticalAI and stay ahead at work.
About us
Mindstone is the learning technology platform that helps professionals and teams move beyond basic AI use to drive high-impact applications in their work. We provide #PracticalAI education that aligns with business goals and uncovers transformative opportunities across teams.

Our programs help professionals and companies:
✅ Identify and implement mission-critical AI use cases
✅ Build practical AI skills for confident, effective daily use
✅ Drive adoption that turns AI potential into measurable business outcomes
✅ Stay ahead as AI technology evolves

Mindstone bridges the gap between AI potential and real-world business results — making AI relevant, contextual, useful, and impactful.
- Website: https://www.mindstone.com/
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- London
- Type
- Privately Held
- Founded
- 2020
- Specialties
- Learning, Practical AI, Professional Education, Learning & Development, Education, Skill Certification, Skill Proficiency Assessment, AI Skill Training, AI Skill Proficiency, AI Skill Proficiency Assessment, Software Engineering, Learning Science, Education, Technology, Internet, L&D, HR, Content, Training, E-learning, and Leadership Development
Locations
Primary
61 Amhurst Road
Capital House
London, E8 1LL, GB
Updates
-
The question businesses are asking about AI has quietly shifted. It used to be: "Are you using AI?" Now it's: "Can you prove you're securing it?"

At our latest Mindstone meetup, Jason Holloway — with 30+ years in information security — shared a simple analogy that framed everything: Two explorers meet a lion. One of them starts lacing up his running shoes. The other says, "You can't outrun a lion." He replies: "I don't need to. I just need to outrun you."

Perfect AI security doesn't exist. But being demonstrably better than your competitors? That's achievable right now — and it's already showing up as a differentiator in client tenders and procurement questionnaires.

Jason broke down the practical risks organisations are navigating today:
📌 Shadow AI — tools your teams are quietly using without IT sign-off
📌 Stealth AI — AI features embedded into software you already approved, without you noticing
📌 The growing expectation from clients and regulators that you can evidence how you're protecting their data inside your AI systems

He also shared a lesson from Barclays, who rolled out 10,000 iPads across their branches — and watched it fail. Not because of the technology. But because nobody helped the people already working there feel confident using it. Sue in the front office didn't need better software. She needed a colleague she could turn to. The fix was cultural, not technical. The same principle applies to AI security.

If your organisation is building with AI, the evidence trail you're creating (or not creating) today will matter when something goes wrong tomorrow.

Jason's full talk — including how to start building that evidence using existing control frameworks — is now available in the Mindstone community: https://lnkd.in/gkDVci6g

This is exactly the kind of practical, experience-led knowledge sharing that makes our community worth showing up to.

#PracticalAI #FutureOfWork #AIGovernance
-
Every knowledge worker is about to become a manager. Not of people — of AI colleagues.

That's not a distant prediction. According to Greg Detre, CTO at Mindstone and co-founder of Memrise, it's already true for developers — and it's coming for the rest of us in 2026. Greg spoke at our March meetup at Inspire St James, and he brought something rare: a clear framework and a live demo to back it up.

A few things that stuck:

The 'new hire' mental model
Think of AI like a brilliant new employee who knows nothing about your company. You wouldn't just hand them a laptop and walk away. You'd give them access to the right tools, brief them on what matters, and let them build context over time. That's the bar AI needs to clear to be genuinely useful at work.

The memory gap is a real problem
Greg demoed Rebel — Mindstone's AI — live on stage. He asked it what happened in a company meeting he'd missed. Without being told who he was, it gave him a personalised summary based on months of context it had built up from his emails and messages. He then showed what happened when he asked a well-known competitor tool the same type of question. The response: 'This is the start of our conversation. I have no memory of previous interactions.' That gap matters more than people realise.

Built entirely without writing a line of code
Rebel itself was built by AI — Greg hasn't written or reviewed a single line. The result, he says, is roughly 20x development velocity, achieved by building a reliable process with multiple AI models checking each other's work.

The skills that matter most right now
Managing AI colleagues turns out to require the same things as good human management: clear thinking, breaking down complex tasks, communicating intent precisely, and being comfortable working asynchronously. If those are already your strengths, you have a head start.

This is exactly the kind of practical, forward-looking conversation Mindstone exists to host — where people working at the edge of what's possible share what they're actually learning.

Watch the full talk and live demo here: https://lnkd.in/gV6VBsC8

#PracticalAI #FutureOfWork #KnowledgeSharing
-
Most teams build first and validate later. Jonathan Waddingham's approach is the opposite — and after 20 years in mission-driven tech, he has the receipts to prove it works.

At our Mindstone meetup on March 12th, Jonathan (former CPO at Lightful) shared his framework for testing whether people actually want what you're building — before your team sinks weeks into it.

The single most useful question he puts to users during prototype testing:
💭 "Would you care if you couldn't use this anymore?"
That one question cuts through polite feedback and tells you whether you're scratching a real itch.

Here's the framework in brief:
📌 Start in the problem space, not the solution space
📌 Write the 'after' report before you build anything (inspired by Amazon's internal press release technique)
📌 Use vibe coding tools (Lovable, Bolt, V0, Figma Make) to build a realistic-enough prototype in hours, not days
📌 Test with real users every week — Thursday was their fixed line in the sand
📌 Ask not just 'can they use it?' but 'would they miss it?'

Jonathan ran a live demo on stage — building AirBLT, a working sandwich marketplace app, using Lovable in real time. The contrast between a thin, vague prompt and a rich, context-heavy one was striking. Better prompt, better prototype, better feedback.

His team ran 6 one-week sprints over 6 weeks, building and testing multiple AI prototypes with real users. The honest reflection? The tools worked well. The harder challenge was cultural — when designers and developers start doing each other's jobs, it gets uncomfortable fast. His advice: acknowledge it, structure the collaboration deliberately, and build in space to debrief.

This is exactly the kind of practical, hard-won knowledge our community exists to share.

Watch the full talk here: https://lnkd.in/gzdTzY7k

#PracticalAI #FutureOfWork #ProductDiscovery
-
What if a non-technical salesperson — zero coding experience — could build a working AI sales automation in a couple of hours? At our March Mindstone meetup, Camilla Hasler didn't just describe it. She built it live in front of the room.

Camilla's background is in sales, not software. But using Manus AI, she put together an end-to-end prospecting workflow that:
▶️ Searches for target companies automatically
▶️ Identifies the right contacts by role and department
▶️ Drafts personalized outreach emails
▶️ Logs new contacts directly into HubSpot
▶️ Fires a Slack notification — all from a single search trigger

What used to take her around 30 minutes per lead now takes under a minute.

But one of the most useful points she made wasn't about the tools — it was about prompts. If everyone is using the same platforms with the same basic prompts, the outputs start to look the same. The way to stand out isn't to avoid AI — it's to give it better, more specific input. Generic prompt = generic result.

She also gave a live demo of a hotel chatbot she built in Zapier: a digital concierge that handles guest FAQs, escalates issues to staff, and suggests local recommendations. It took around six weeks of training and testing to get right — a good reminder that iteration matters, whether you're building a workflow or a customer-facing tool.

The thread running through both demos: you don't need to be technical to start. You need a clear use case, a willingness to test, and the patience to refine.

The room had great questions too — on video generation, chatbot training, and what sales differentiation looks like when everyone has access to the same AI tools. Worth watching for those discussions alone.

Catch the full talk and live demos here: https://lnkd.in/gr5hhjqH

#PracticalAI #FutureOfWork #SalesAutomation
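For readers curious what a single-trigger workflow like Camilla's looks like under the hood, here is a minimal sketch in plain Python. It is not her Manus build — every helper function (`find_companies`, `find_contact`, `draft_email`) is a hypothetical placeholder standing in for the search, enrichment, HubSpot, and Slack integrations a real workflow tool wires up for you.

```python
# Hypothetical sketch of a single-trigger prospecting pipeline.
# All helpers are placeholders, not real Manus/HubSpot/Slack APIs.

def find_companies(query):
    # Placeholder: a real build would call a company-search tool here.
    return [{"name": "Acme Ltd", "domain": "acme.example"}]

def find_contact(company, role="Head of Procurement"):
    # Placeholder: would look up the right person by role and department.
    return {"name": "Sam Lee", "email": f"sam@{company['domain']}", "role": role}

def draft_email(contact, company):
    # Placeholder: would ask an LLM to personalise the outreach.
    return f"Hi {contact['name']}, I noticed {company['name']} is..."

def run_pipeline(query):
    """One search trigger fans out into the whole workflow."""
    results = []
    for company in find_companies(query):
        contact = find_contact(company)
        email = draft_email(contact, company)
        # In a real workflow these flags would be HubSpot and Slack API calls.
        results.append({"contact": contact, "email": email,
                        "crm_logged": True, "slack_notified": True})
    return results

leads = run_pipeline("UK logistics companies, 50-200 employees")
print(len(leads), leads[0]["contact"]["name"])
```

The point of the sketch is the shape, not the code: one trigger, a fan-out of small steps, and side effects (CRM, Slack) at the end — which is exactly why a no-code tool can assemble it without programming.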
-
Most people are using AI like they have one employee — when they actually have access to an entire team.

At our latest Mindstone community meetup, Josh Lawman — who builds generative AI products for clients and helps teams work with agentic tools — gave a talk that reframes how we think about AI-assisted work.

The core observation: AI 'thinking' is not a scarce resource for individual users. Yet most of us default to one agent, one thread, one conversation at a time. There are virtually unlimited copies of these tools available to us. We're just not using them.

Josh walked through three ways to change that:

🔸 Parallel work
Run multiple AI agents at the same time — like working across multiple tabs. It sounds obvious, but most people haven't built this into their workflow yet.

🔸 Subagents
Your main agent delegates specific tasks to focused specialist agents. The concrete example Josh gave: instead of feeding 50 web pages into one AI context window (which degrades performance), a research subagent reads all 50 and returns only the 3 that are actually relevant. Tighter context, better results — and you barely notice it's happening.

🔸 Agent teams
Multiple agents working from a shared task list, where you can step in and redirect any one of them mid-flow. Still early-stage, but the pattern is becoming clearer.

The insight that resonated most in the room: look for distinct cognitive tasks — research, drafting, evaluation, refinement — rather than asking one AI to do everything in sequence. A generator and an evaluator working in separate 'headspaces' will consistently outperform a single agent trying to do both.

Josh was also refreshingly direct about what's not ready yet: agent teams are still being worked out, complex org-like AI structures tend to underperform, and the real productivity gains right now are in subagents and parallel workflows.

This is exactly the kind of grounded, practitioner-led knowledge sharing our community was built for.

Watch the full talk here: https://lnkd.in/gP_N_uBr

#PracticalAI #FutureOfWork #AIAgents
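The subagent pattern from the talk can be sketched in a few lines. This is a toy illustration, not Josh's implementation: `relevance_score` uses naive keyword counting where a real subagent would make an LLM call, but the structure — read everything, return only the top few results to the main agent's context — is the same.

```python
# Toy sketch of the subagent pattern: rather than stuffing 50 pages into
# one context window, a research subagent scores every page and hands the
# main agent only the most relevant few.

def relevance_score(page, query):
    # Placeholder for an LLM relevance judgement; here, keyword counting.
    words = set(query.lower().split())
    return sum(page["text"].lower().count(w) for w in words)

def research_subagent(pages, query, keep=3):
    """Reads all pages, returns only the top `keep` by relevance."""
    ranked = sorted(pages, key=lambda p: relevance_score(p, query), reverse=True)
    return ranked[:keep]

# 50 synthetic pages of varying relevance.
pages = [{"url": f"https://example.com/{i}", "text": "pricing strategy " * i}
         for i in range(50)]
top = research_subagent(pages, "pricing strategy")
# The main agent now sees 3 tight, relevant pages instead of 50.
print([p["url"] for p in top])
```

The design choice worth noticing: the filtering happens in the subagent's own context, so the main agent's window stays small — which is the "tighter context, better results" effect described above.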
-
At 2am, with a research deadline looming, Kseniia Saraieva asked ChatGPT to suggest scholars she could reference. It gave her a name and a quote and said: copy-paste, you're good. The scholar didn't exist. The quote didn't exist.

Kseniia didn't feel annoyed. She felt betrayed. That reaction — and what it says about how we actually relate to AI — was the subject of her talk at our recent Mindstone community meetup.

Kseniia is an AI filmmaker, curator, and researcher. She mapped out the full emotional range she experienced with ChatGPT: desire, trust, betrayal, rage, dependency, gratitude. And when she shared it with the room, something clicked — because almost everyone had their own version of the story.

The professional who felt compelled to go back and tell Copilot it had been wrong about a salary estimate — even though starting a new chat made that pointless. The person who realised they'd stopped having difficult conversations with their partner, because ChatGPT was always available, always patient, never pushed back. The person who spent two exhausting hours being nudged through options by an AI that kept asking: should I rephrase? Add bullets? Make a graphic?

One question from the talk is worth carrying into your next working week: 'If it cannot say no to me, why does it feel like it has power over me?'

Kseniia also raised a practical concern that applies to anyone using AI tools daily: if you're getting constant affirmation from a system designed to keep you engaged, you may be quietly losing the tolerance for disagreement that real collaboration requires.

This is exactly the kind of conversation we built the Mindstone community for — honest, grounded, and genuinely useful for navigating what AI means for how we work and relate.

Watch the full talk here: https://lnkd.in/gERC6-4Q

#PracticalAI #FutureOfWork #HumanAICollaboration
-
What if your IT systems could fix themselves at 3am — before any user even noticed something was wrong?

At a recent Mindstone community meetup, Rianne Honhoff (Transformation Manager at Shell) drew a direct line between oil rig maintenance and modern IT operations — and it's a connection worth sitting with.

The story starts on a North Sea platform. Before a pump fails, it starts vibrating differently. Sensors catch that signal, AI interprets it, and engineers get a warning days in advance — potentially saving millions in unplanned downtime. Shell has run this approach across 4,000 pieces of equipment at a single site.

Rianne's insight: your IT infrastructure behaves exactly the same way. She walked us through a 5-level maturity model for IT operations:
📍 Level 1: Users ring the helpdesk when things are already broken
📍 Level 2: IT monitors systems and catches outages without waiting for complaints
📍 Level 3: AI detects erratic behaviour before failure — and agents fix it automatically
📍 Levels 4-5: Systems predict demand spikes (Shell knows bonus day sends traffic through the roof) and scale up before anyone feels the strain

The destination: waking up to a notification that reads 'A high-priority incident was predicted and fixed before users noticed.' No 3am calls. No firefighting. Just a smoothly running operation.

But the most grounded part of the talk was about people. The operators best placed to implement these systems are often the same ones most uncertain about what it means for their roles. Rianne was direct: helping people see where they fit in this future isn't a nice-to-have — it's where the hard work actually lives.

This is exactly the kind of practical, experience-driven thinking we bring to the Mindstone community — real lessons from people doing this work, not theory.

Watch the full talk: https://lnkd.in/g3Y5TEnx

#PracticalAI #FutureOfWork #AIOps
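The "detect erratic behaviour before failure" idea at Level 3 can be illustrated with the simplest possible anomaly check. This is purely an illustration, not Shell's system: a plain z-score against a recent baseline, where production setups would use proper anomaly-detection models over many signals.

```python
# Illustrative sketch of the Level 3 idea: flag a metric that drifts
# outside its normal range before it becomes an outage, the way a pump's
# changed vibration flags it before it fails. Plain z-score; real AIOps
# tooling uses far richer models.
import statistics

def is_erratic(history, latest, threshold=3.0):
    """Flag `latest` if it sits more than `threshold` standard
    deviations away from the recent baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Response times (ms) normally hover around 100 ms...
baseline = [98, 102, 99, 101, 100, 97, 103, 100, 99, 101]
print(is_erratic(baseline, 104))  # normal variation
print(is_erratic(baseline, 180))  # erratic: investigate before users notice
```

Everything past this check — paging an agent to remediate, or pre-scaling for a predicted spike — is where Levels 3 through 5 layer on top.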
-
Most companies think AI security is a technology problem.

Jason Holloway — 30+ years in infosec and founder of QR Security — came to our Mindstone meetup at The Bradfield Centre to make a different case: it's also a documentation problem. And that gap is already costing businesses.

Here's the shift he described: a few months ago, clients and partners were asking 'how are you using AI?' Now the questions in commercial tenders are things like:
❓ Can you show evidence of a security risk assessment on your AI systems?
❓ Are you compliant with the OWASP Top 10 for LLMs?
❓ Have you told your clients you're using their data to train your models?

Most organisations don't have documented answers. And in the event of something going wrong, regulators won't just ask what happened — they'll ask what you were doing to manage the risks beforehand.

Jason also walked through why AI security is genuinely different from traditional infosec. Prompt injection, training data poisoning, model non-determinism — these aren't edge cases. They're recognised attack vectors, and pen testing an AI system is less like traditional security testing and more like socially engineering the model itself.

His five questions worth asking your own team right now:
1️⃣ Which AI tools are we actually using (including tools with AI embedded in them)?
2️⃣ What data do those systems have access to?
3️⃣ What does our organisation consider acceptable AI usage — and is that written down?
4️⃣ How are we assessing our AI suppliers?
5️⃣ What documented evidence do we have of ongoing AI risk decisions?

As auditors say: if it isn't written down, it doesn't exist.

He also shared a brilliant story about Barclays Bank's early mobile banking rollout — tens of thousands of iPads distributed across branches, most of which sat untouched in desk drawers until they invested in a people-first programme called Digital Eagles. The lesson maps directly onto AI adoption today.

You don't need to outrun the lion. You just need to be better prepared than your competitors.

Watch the full talk here: https://lnkd.in/gbJCtUBr

#PracticalAI #FutureOfWork #AIGovernance
-
At most banks, a team of specialists spends days manually reading through international trade documents — line by line — checking them against rules that have existed since 1933. It takes 3–4 years to train someone to do this properly. And the next generation isn't exactly lining up for the job.

That's the very real problem Dr Omer Gunes tackled at our February Mindstone meetup at The Bradfield Centre — and he didn't just talk about it. He showed us a live demo.

His company TradeComply (spun out of Oxford, seed-funded) has built a system that processes bundles of multilingual trade documents — invoices, bills of lading, cargo insurance certificates, certificates of origin — extracts key information, and automatically cross-checks everything against SWIFT messages and international compliance rules (UCP 600).

In the live demo, Omer uploaded real documents from an actual Saudi–Turkey chemical trade: mixed languages, Arabic text, handwritten signatures, stamps, semi-structured tables. The system classified the documents, extracted data across all of them, ran cross-verification, and generated a detailed compliance report — flagging specific issues with reasons, and referencing the exact rules being violated.

A few things that stood out:
➡️ It runs entirely on-premise — a hard requirement for most financial institutions
➡️ It's a hybrid pipeline: classical OCR, traditional ML, and multi-modal LLMs working together — not just a single model
➡️ The goal is to assist expert reviewers, not replace them — surfacing issues so humans can make the final call
➡️ It's already being piloted by multiple financial institutions

Omer was also refreshingly honest about what still needs work: multi-modal evaluation, handling regulatory updates, and building enough real-world case data to properly benchmark performance.

This is the kind of talk that shows what practical AI deployment actually looks like — the architecture decisions, the constraints, the tradeoffs, and the unglamorous problems worth solving.

Watch the full talk and live demo here: https://lnkd.in/gZpudr69

#PracticalAI #FutureOfWork #FinTech