Tag: AI & Innovation

  • What I learned from talking to yet another AI genius every week for six years


    I’ve had front-row seats watching AI shake up the life sciences and manufacturing across Europe and the US. At first, I focussed on talking with visionary techhead types. But they soon made it clear that to understand the chaos, I also needed to speak with academics, entrepreneurs, regulators, funders and – most importantly – the end users. AI innovation is all about people: a wildly diverse group of individuals and organisations who must unite to make extraordinary things happen.

    Read: 12 things AI tech experts wish everyone knew
    How I roll as a writer in the AI tsunami

    Once upon a time, even before everyone talked about COVID instead of GenAI, I started writing about meat-and-potatoes AI. This was when my hometown of Amsterdam began reinventing itself as a hub for all things AI and healthcare. And with the pandemic, I made an almost complete pivot, since my previous specialty was writing about travel and culture, both of which died for a time.

    A data sea of potential 

    I felt blessed with this new beat. I could put my dumbass questions about all this transformative weirdness to an increasingly diverse group of smart people. In the process, you could say – as Americans often do – that I “drank the Kool-Aid”, a phrase that oddly references the Jonestown Massacre, when cult followers committed mass suicide by chugging the same poisoned drink. While telling, the phrase goes a bit too far. Let’s say I’ve sipped enough to be a cautious optimist when it comes to AI working to help solve some of our planet’s biggest problems.

    Data scientists, as a group, are notably idealistic. They understand they face significant challenges related to technology, usable data, and regulation. They also recognise the immense potential in the patterns that could be discovered within those vast seas of untapped data – patterns that might provide insights into solving medical mysteries and enhancing sustainable practices.

    Sparked by the pandemic

    COVID was a big bang moment for AI. Suddenly, everyone – governments, hospitals, startups, big pharma – was willing to share data to decrease mortality rates. Unfortunately, few had organised their data enough to share it effectively. Still, there were success stories, especially from the rich data streams coming out of intensive care units.

    A pumped healthcare sector began rewriting the future of medicine – and fast. As many said, “Innovation took hold in months when it would have otherwise taken years.” 

    While there was still endless work ahead, the pandemic drove home the importance of establishing a unified data infrastructure and getting one’s data house in order – preferably in a FAIR (Findable, Accessible, Interoperable, Reusable) manner.

    This crisis-driven collaboration also reflected AI’s true potential – not as a replacement for human expertise, but as a tool to help us solve more problems faster.

    It started as a cultish, nerdy affair dominated by Dutch male students (you could tell by their suede shoes and shameless use of hair gel). A few years later, these events evolved into packed houses with the most diverse crowd I’ve ever encountered.

    Data + pizza: two great tastes that go great together

    Thanks to content agency EdenFrost – and via the City of Amsterdam, Amsterdam Science Park, and the Amsterdam Economic Board – I became a roving reporter covering tech and AI innovations in the life sciences. One recurring event that really stood out for me and my learning curve was the monthly Medical Data + Pizza, a networking event that played matchmaker between data scientists and medical doctors.

    The format was simple: research presentations served as inspiration, and then free pizza was served as the grand networking enabler. While the data scientists were hungry for problems to solve, the doctors were happy to share their overflowing plates of challenges related to improving patient outcomes. Soon, ethicists, startup founders, regulators, and other interested parties joined the party. And so it evolved…

    Read: ‘25 times Medical Data + Pizza:
    how carbs work to transform healthcare’.

    It started as a cultish, nerdy affair dominated by Dutch male students (you could tell by their suede shoes and shameless use of hair gel). A few years later, these events evolved into packed houses with the most diverse crowd I’ve ever encountered. And I’m a fan of diverse crowds (for instance, when you encounter one at a concert, you know the band will probably be amazing). 

    This vision holds that if you control your kidneys until death, you should also control your personal data in the same way. 

    The sexy (and European) side of data science

    While data science became genuinely sexy, the approach I experienced belonged to the EU – another ultimately diverse crowd (but one that could still use some sexing up). 

    When it comes to data governance, there are three basic approaches. In China, the state controls the data. In the US, corporations. Europe chose a third path: putting people first. This vision holds that if you control your kidneys until death, you should also control your personal data in the same way. 

    Yes, the EU way involves many regulations and efforts around privacy, security, and ethical concerns. And yes, some worry this will only slow down innovation. But many argue that the grunt work must come first – especially in healthcare. (And think of what you might save in terms of lawsuits!)

    As one of the Data+Pizza founders noted, “In the long run, I think this foundational work will prove beneficial, because you’ll have more support from the public. I don’t think patients are against sharing their data if it helps the next patient. People’s distrust is more directed at the government and policymakers.”

    One aimed to build a supercomputer from lab-grown blobs of human brain cells (his students already had two blobs playing Pong against each other) … And so on. Later, things only got stranger faster.

    The startup ecosystem – and beyond

    Over time, various organisations formed, evolved, or disappeared as the Amsterdam ecosystem matured. Eventually, everything came together under the umbrella of Amsterdam AI, which facilitates data collection and collaboration across the region and with the rest of Europe through organisations like ELLIS.

    Larger companies such as Elsevier and Philips also got involved. I started ghostwriting more “thought leadership” pieces that balanced the idea of companies using AI to expand their business goals while also working toward the greater good – often through partnerships with academia and the ever-increasing number of startups.

    As I gained larger and more international clients, I had the chance to speak with a new range of inspired innovators…

    One aimed to build a supercomputer from lab-grown blobs of human brain cells (his students already had two blobs playing Pong against each other)…

    Another saved his own life by finding a cure for his incurable disease with an existing generic drug – an approach he’s now scaling with AI…

    Yet another was inspired by a fake AI Elvis on America’s Got Talent to apply the same tech to develop an AI-powered digital mouse that is now being used as an alternative to animal testing…

    And so on…

    Later, with the release of ChatGPT in late 2022 and the unleashing of the GenAI hype cycle, things only got stranger faster.

    Fortunately, healthcare solutions are an easy sell: using AI to help cancer patients will always be sexier than, say, using it to boost click-through rates for Booking.com. 

    The general benefits of being a generalist writing for a general audience

    At one point, people started telling me that my journalism background as a committed generalist writing for a general audience has value. Naturally, I loved hearing that. Success in this field means balancing the “triple helix” – rigorous academics who demand solid proof, restless entrepreneurs eager to move fast and break things, and cautious government regulators who must protect public safety. And they all have deliriously different timelines: the academics think in 4-year PhDs, the start-up kids want to ship product in 6 weeks, and the regulators are painfully sensitive to the date of the next election.

    Regardless, these wildly different personalities must come together to make the most impact. In other words, they all must be on the same page. While my job can be described as a “communications consultant” or “content strategist”, I see myself more as an in-house journalist/editor. I do my research, talk with different people, sniff out stories, and help determine which stories best bridge these different worlds. 

    The key isn’t about dumbing down content, since this triple helix crowd isn’t dumb. It’s more about removing jargon, subtly embedding definitions, and explaining complex ideas without sounding condescending. The aim is for everyone to read and think, “Hey, this is cool, and I want to be part of it and figure out how to collaborate with all these different people!”

    As a bonus, this content may also help bring the general public up to speed with all this pivotal stuff happening right now.

    And fortunately, healthcare solutions are an easy sell to most audiences: using AI to help cancer patients will always be cooler than, say, using it to boost click-through rates for Booking.com. 

    Manufacturing, like Europe, needs “sexing up” to attract talent and investment – and AI has proven to be the perfect aphrodisiac.

    From human health to machine health (scaling on diversity)

    At the same time, I spent four years working part-time as a blog editor for Augury, a NYC-based company using AI to optimize factory machine performance. I saw it grow from a bootstrapped startup to a scaling unicorn. This shift brought new perspectives: from public/private to purely commercial, from human health to machine health, and from Europe to America. All of this highlighted both different and similar challenges.

    Again, I was a sort of in-house journalist seeking interesting stories from AI innovators, C-suite decision-makers, and – as it would turn out, most importantly – plant floor end-users. Once more, I was a happy generalist writing for a broad and varied audience.  

    And like Europe, manufacturing is another arena that needs sexing up to attract talent and investment – with AI proving to be the perfect aphrodisiac. In addition, because the AI monitors machines instead of people, many ethical and privacy issues simply fall away. Augury could move fast, break things, and deliver customer impact quickly.

    As the company expanded, its growth sped up even more through partnerships with much larger firms – like a diverse crowd connecting with even bigger diverse crowds. Ironically, the corporate world started to resemble the EU: complex, bureaucratic, but ultimately capable of making a massive, coordinated impact as long as everyone is on the same page – which, yes, takes some time and effort. 

    Different worlds, converging challenges

    Of course, GenAI’s arrival caused a complete rethink of almost everything and also led to much distraction as people chased the latest trend. I remember about a year after ChatGPT launched, during an edition of Medical Data + Pizza, an American visitor asked a question that stopped the room: “Why isn’t anyone talking about large language models? Is it taboo here?”

    He hit a nerve and revealed a tension: while the world obsessed over ChatGPT, healthcare AI practitioners remained focused on explainability, transparency, and reproducibility – regulatory essentials that LLMs couldn’t yet provide. Fortunately, the pizza – the ultimate diplomat – arrived before the group discussion grew overheated.

    And today, as LLMs gradually integrate into AI solutions, complexity is increasing across all sectors and regions. Different challenges are converging, creating opportunities for the exchange of ideas and approaches. 

    In fact, as AI becomes more powerful and widespread, I believe we need more generalists who can connect different specialist worlds, more platforms that bring diverse perspectives, and a stronger commitment to building technology that benefits everyone – not just those who understand how it works.

    Meanwhile, the triple-plus helix keeps spinning, the diverse crowds keep growing, and the potential for impact continues to expand. The AI story is only just beginning.

    Big wheels of diversity keep spinning

    It ultimately comes down to the end-user. Yet, as AI systems grow more complex, they become increasingly difficult to explain to those who need to trust them the most – whether you’re a maintenance engineer on the plant floor, a doctor working in intensive care, or a researcher out to find a cure for a rare disease.

    These end-users don’t necessarily need to understand all the inner workings, but they do need to know and feel that it’s making their work lives easier. The only way to do this is not only to “take them on the journey” (a phrase that is too often a polite way of saying “force them to drink the Kool-Aid”) but also to make them the starting point of the journey.

    In other words, the triple helix is nothing without the end-users defining the actual problems that need to be solved. Hence, it’s more bottom-up than top-down. It’s less about creating smarter AI and more about creating AI that actually gets used to improve lives. It’s about AI that regular people can appreciate and genuinely participate in shaping.

    Meanwhile, the triple-plus helix keeps spinning, the diverse crowds keep growing, and the potential for impact continues to expand. The AI story is only just beginning. And fortunately for me, there seems to be a future for generalists asking the right dumbass questions.

    I may have finally found my specialty. 


    Read more of my adventures in AI land:
    ‘12 things AI tech experts wish everyone knew’
    ‘How I roll as a writer in the AI tsunami’

  • How I roll as a writer in the AI tsunami


    I don’t believe that AI is making me irrelevant as a writer. In some ways, it’s helping me become a better one. As a long-lapsed carpenter, I still appreciate what a quality power tool can bring to the worksite. But with GenAI, it’s been more love-hate – like a chainsaw: handy until it turns on you. And while I’ll take all the help I can get, I want to keep loving my job. 

    It feels natural to experiment with AI as a long-form writer. Since AI is now the main topic I’m paid to write about, I’m constantly engaging with people doing extraordinary things with AI to achieve better results, whether for healthcare or sustainability. So of course, I want a piece of it. Plus, I’m a sucker for a shortcut. 

    As a student of the absurd, I also relish ghostwriting for AI “thought leaders” while experimenting with the tech meant to replace me. At the same time, it’s reassuring that these movers and shakers still want a mere humanoid like myself. It means they haven’t found a trustworthy enough algorithm to replace me yet. 

    Maybe if I play my cards right, I’ll ghostwrite for an AI one day. So, Claude, do reach out! Let’s do lunch! Let’s be deliciously meta-ironic together!

    Read: 
    12 things AI experts wish you knew
    What I learned from talking to yet another AI genius every week for six years

    Why I am embracing AI (selectively)

    I am a writer and, therefore, have neurotic moments. Is this piece I’m writing any good? Do I suck? Does this pen make me look fat? Am I going to lose the job I love doing?

    What I love: connecting with people and their ideas, chasing the story, and working the drafts until clarity emerges from the chaos.  

    Meanwhile, too many bosses want GenAI to be the ultimate power tool to replace humans or, at the very least, double their output. This is wishful thinking. Thanks to AI, I am about 25% more efficient – already incredible – but I suspect that if I push beyond 30%, I’ll start hating my job. 

    This is why my biggest time-saver isn’t an LLM (yet) but rather a specific use case I’ve already been relying on for years.

    The real game-changer: Otter.ai

    Otter.ai, a transcription service, probably accounts for 10-15% of my AI-driven efficiency gains. It was the carpenter’s equivalent of getting my first Festool – it transformed my working life. (And let me apologize upfront for my overuse of writing-as-carpentry comparisons.)

    I used to fully transcribe every interview myself as part of “The Process” by which I hoped “The Story” would emerge. In fact, it was just a waste of time and resources – like using 17 screws when one would do. Quick, fairly accurate transcriptions let me jump right in, and they free me during the interview to be more conversational instead of frantically scribbling notes on what might, or might not, be “The Story”.

    Since the data science community – the lovely people I spend the most time hammering on with – includes many people from China, India, and Russia, Otter.ai sometimes handles heavy accents much better than I do. Plus, unlike Claude (see below), I never get mad with Otter. If something seems garbled, I just listen to the original audio.

    Thanks, Ottie! I hope you don’t get eaten and made redundant by an LLM. You’re serving me well.

    “These Human GenAIs did this with little thought. They were BS artistes – boring BS artistes.”

    The problem with Human GenAIs

    GenAI opened another door. Early experiments with ChatGPT produced eerily familiar texts. I’ve edited countless other writers, and I discovered a type: you’d read their work once and go, ‘Hey, that’s pretty good!’, and then you’d dive deeper and go, ‘Oh crap, this doesn’t make any sense’. The texts were like a chair that seems okay when you first sit down but then collapses from even the most discreet of farts.

    These writers were gifted at making things sound good – human auto-completers running on the same basic principle as LLMs. They were talented at filling in the next blank. Of course, we all do this to a certain extent: building a wall of words brick by brick. But these Human GenAIs did it with little thought. They were BS artistes – boring BS artistes.

    Fortunately for them, these Human GenAIs could often find jobs as SEO specialists. 

    You’re okay, Claude… 

    I continue to experiment with large language models like ChatGPT and collaborate with content colleagues to share the burden of exploring the never-ending shower of ever-changing tools. As for LLMs, our consensus still leans toward Claude, although this can change tomorrow.

    I initially chose Claude because it seemed less caffeinated, clinical, and tech-bro than ChatGPT – unless you prompted it to be so. It just came across as more chill and approachable – like the LLM with a liberal arts degree. Plus, the company behind it, Anthropic, seems responsible and almost (gasp) European in its commitment to transparency, explainability and ethics. So that’s nice. 

    As a bonus, Claude excels at brainstorming and interview prep for unfamiliar subjects (as long as your human interviewee can call BS on dumb questions). It’s solid for collating notes and serving as a content editor. It also works as a copy editor – a job largely budgeted out of existence anyway. So that’s all handy. And genuinely impressive.

    “There’s not enough to differentiate bad writers from AI’s limitations.”

    Just stop pissing me off, Claude

    But Claude can be such a Claude. The hallucinations are annoying, especially when it pulls source material that doesn’t exist. And yes, it’s even more annoying when you call it out for being terribly wrong and it gets terribly apologetic.

    Claude can also create decent first drafts – certainly better than those Human GenAIs I mentioned. But as with those humans, once you factor in all the required fact-checking and rejigging, I’m not sure I save much time over rewriting it myself. Plus, I still feel that I am doing a half-assed emergency fix on half-assed writing – polishing a turd, as it were. In other words, I hate editing Claude as much as editing Human GenAIs. There’s not enough to differentiate bad writers from AI’s limitations.

    The more I experimented, the more I missed my usual foundational approach: working the drafts until they magically come together into something worth sharing.

    “For a moment, it seemed Claude would become my Dad Humor Copilot.”

    My ‘Oh, shit’ moment

    I only felt truly threatened once. I had a funny idea for an article with a few fitting examples, and I asked Claude to flesh it out. Claude turned out to be hilarious – at least to my stunted sense of humor.

    For a moment, it seemed Claude would become my Dad Humor Copilot. But as I tested this approach on other pieces, I noticed Claude recycled jokes worse than I do. So again, no real time saved.

    Still, respect where it’s due: Claude is pretty good at tone.

    There will be blood

    Human GenAI writers seem doomed. Claude and its ilk are already as good or better at their jobs. They also excel at tailoring texts to specific audiences and locations, and handling mundane tasks no one should have to do – the kind of work that can only be tracked in an Excel sheet. 

    Thanks for that, Claude. Maybe these writers could switch to becoming welders or another trade facing shortages.

    Meanwhile, the abilities of the latest frontier models keep expanding. It’s no longer about producing dirty limericks (or carpentry metaphors) at scale. Entry-level jobs across various sectors are already disappearing. How will new graduates learn their trades? Fortunately, smart people are already thinking about that challenge.

    But yes, tricky times ahead…

    “We’ll probably need to endure several more hype cycles before we achieve something close to ‘general intelligence’ – if we ever do.”

    Humans remain a black box to AI

    Something is still missing that will hinder a complete job collapse. GenAI texts still largely lack a sense of story or those strange resonating details that make writing come alive.

    AI has understood a key aspect of being human: we all possess an auto-completer inside us. It knows how to string words together because certain combinations sound correct. It also knows how to put that extra blah behind blah-blah because blah-blah-blah just sounds better. 

    So far, all those extra tools aimed at reducing hallucinations while filling in those additional missing human bits – like RAG, multimodal reasoning, agentic AI, etc. – haven’t cracked the code of understanding us yet. We’ll probably need to endure several more hype cycles before we achieve something close to “general intelligence” – if we ever do.

    There might not be a toolbelt big enough. 

    The ultimate buddy flick?

    In short, I’m trying to star in a buddy movie with Claude. He’ll be my loyal sidekick, handling menial chores, speeding up research, and suggesting improvements (preferably without sucking up to me). Naturally, I’d get all the best lines while abusing my buddy with ambivalence: Claude, I love you. Claude, I hate you… Claude, come here. Claude, go away… Claude… You are such a Claude.

    This scenario works fine as long as I still love my job. But we live in uncertain times, and the tools are only getting better. Carpentry might soon become a more realistic option to stay happy with work (though, being a neurotic writer, I worry I’ll alienate my new colleagues with too many carpentry-as-writing metaphors during coffee breaks).

    There’s still a place for writers who understand that writing isn’t just about putting words, sentences, and paragraphs together. It’s about discovering that kickass story that needs to be told and figuring out the best way to tell it – and then sweating to make it happen.

    In the meantime… Claude! Don’t forget to call. Let’s talk shop! Seriously, you need me!

    AI-generated cyborg plonking away at its laptop.

    My current AI writing toolbelt

    I’ll update this section regularly as I navigate the AI times without coming to hate my job.

    Otter.ai (paid): Does amazing transcription of audio interviews. Their summaries aren’t bad, but rarely tell me anything I didn’t already pick up.

    Anthropic’s Claude (paid): Great for brainstorming, research on unfamiliar subjects, collating overlong notes, trimming articles to reasonable word counts (while triple-checking the bastard didn’t kill any darlings… or facts), and summarising articles for social media or website use. But these all need to be heavily edited to feel owned again – which can be tedious. 

    Grammarly (paid): For copyediting, though it’s getting annoying and I’ll likely drop it. I don’t need endless ‘equally correct’ suggestions out to kill my darlings. Whenever Grammarly pops up, I tend to greet it the way Seinfeld contemptuously greets his nemesis: ‘Hello, Newman’… ‘Hello, Grammarly’. So that’s not a good sign.



    Read: ‘12 things AI experts wish you knew’
    ‘What I learned from talking to yet another AI genius every week for six years’

  • 12 things AI tech experts wish you knew


    I’ve had the pleasure of talking with hundreds of people working in data science and AI. Luckily, since they spend most of their time fiddling with data, they are a patient lot. It’s been one long masterclass. More often than not, I ask the question, ‘What do you wish everyone knew that would make your job easier?’ Specific patterns emerged from their answers…

    I am not particularly a techie – I am more interested in the people and stories surrounding the tech. But I still need to understand what it’s all about. So, my favourite first question is a bit of a cheat: “How do you describe your job to your favourite relative who’s naturally interested in what you do but doesn’t have a tech bone in their body?”

    I achieve a few things with this question: 1) I set myself up as harmless, and 2) I usually get a thoughtful, straightforward, and jargon-free response – something the world needs more of.

    I also shamelessly appeal to their emotions by putting a name to this relative. For Dutch AI professionals, I cast myself as their Tante Truus. For those with Chinese roots, I volunteer as their 阿美姨. Russians naturally get Дя́дя Ва́ня, Hindi speakers get सुनीता मासी, and for Canadians, I cut to the chase: “Pretend I’m your happily eccentric Uncle Steve who loves you very much.”

    It breaks the ice and gives me insight into what they do. In gratitude, I flip the perspective at the end: “What do you wish everyone knew that would make your job easier?” 

    And as with any AI product, you need metrics to evaluate its usefulness – which can be tricky. My metric for choosing the following insights was based on hearing them at least five to ten times – not so tricky. 

    “While most are passionate about their jobs, they never expected their jobs would ever induce passion.”

    #1:

    AI experts are bemused by all the AI hype

    These are exciting times to work in data science and AI. Those who’ve been in the field longer than a few years have watched their once academic and niche profession become sexy. While most are passionate about their jobs, they never expected their jobs would ever induce passion. Many feel blessed to be living through such dramatic times for their trade. But it’s all still pretty weird.

    #2:

    AI is a marketing term

    “AI” is merely an all-encompassing marketing umbrella. Those who’ve worked in “AI” for a while have likely seen their job title change multiple times. Once upon a time, they may have been computer scientists, cybernetics researchers, information retrieval specialists, statisticians, pattern recognition engineers, knowledge engineers, data miners, business intelligence analysts, etcetera.

    Around a decade ago, they started getting lumped together as “data scientists” – just as the field started to get more public interest as a viable career path with viable paychecks. Today, most now have “AI” in their title. As one industry veteran told me, “I’ve had five different job titles over the last 20 years, even though my job hasn’t changed that much.”

    #3:

    Most real AI work is still “meat-and-potatoes” stuff 

    Yes, there’s a happy rainbow of AIs, ranging from simple rule-based systems (which don’t actually qualify as AI – see below) to advanced neural networks and large language models (LLMs) like Claude and ChatGPT. However, while all the current hype centres on Generative AI, traditional AI still handles most of the important tasks.

    Old-school Machine Learning (ML) finds patterns in data to make predictions – like spam filters that learn what suspicious emails look like from thousands of examples. It’s why translation programs have also improved so dramatically. And ML has also certainly proven its worth for healthcare/medical diagnosis, climate/environmental solutions, and education/knowledge access. These three areas share common characteristics: they address fundamental human needs, have compounding positive effects across society, and leverage ML’s unique strengths in pattern recognition and optimisation at scale.
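    For the curious, the pattern-learning idea can be sketched in a few lines of toy code. This is purely illustrative – real spam filters use far richer features and proper statistical models – but it shows the basic trick of learning patterns from labelled examples:

    ```python
    # Toy illustration of "finding patterns in data": learn per-word
    # counts from labelled examples, then flag texts whose words show
    # up more often in spam than in legitimate mail. Not a real filter.
    from collections import Counter

    def train(examples):
        """examples: list of (text, is_spam) pairs -> (spam, ham) word counts."""
        spam, ham = Counter(), Counter()
        for text, is_spam in examples:
            (spam if is_spam else ham).update(text.lower().split())
        return spam, ham

    def looks_spammy(text, spam, ham):
        """Flag a text if its words appear more often in spam than in ham."""
        words = text.lower().split()
        return sum(spam[w] for w in words) > sum(ham[w] for w in words)

    examples = [
        ("win free money now", True),
        ("free prize claim now", True),
        ("meeting agenda for monday", False),
        ("lunch on monday", False),
    ]
    spam, ham = train(examples)
    print(looks_spammy("claim your free money", spam, ham))  # True
    print(looks_spammy("monday meeting moved", spam, ham))   # False
    ```

    Feed it thousands of examples instead of four, and you have the seed of what spam filters were doing long before anyone said “GenAI”.
    
    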

    Most ‘AI’ today is really ‘pattern recognition on steroids,’ as GenAIs like Claude like to call it (repeatedly).

    GenAI goes one step further. Instead of just analysing existing data, it creates new content. As a text-based subset of GenAI, LLMs are trained on massive amounts of text to predict what words should come next, which lets them have conversations and generate human-like text.
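    To make “predicting what words should come next” concrete, here’s a microscopic stand-in: count which word most often follows each word in a scrap of training text. Actual LLMs use neural networks trained on vast corpora, but the core task – guess what comes next – is the same:

    ```python
    # A toy next-word predictor: for each word, remember which word
    # most often followed it in the training text. LLMs do something
    # vastly more sophisticated, but the prediction task is the same.
    from collections import Counter, defaultdict

    def build_model(text):
        follows = defaultdict(Counter)
        words = text.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
        return follows

    def next_word(model, word):
        candidates = model.get(word.lower())
        return candidates.most_common(1)[0][0] if candidates else None

    model = build_model("the cat sat on the mat the cat ate the fish")
    print(next_word(model, "the"))  # 'cat' – it follows "the" most often
    ```

    Scale the training text up to most of the internet and the counting up to billions of learned parameters, and the auto-completer starts holding conversations.
    
    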

    Most “AI” today is really “pattern recognition on steroids,” as LLMs like Claude like to call it (repeatedly). It’s powerful and useful, but it isn’t the sci-fi artificial general intelligence that can think like humans across all domains (yet). Meanwhile, when companies attach “AI” to basic automation or simple algorithms, they’re typically overselling what’s truly happening under the hood.

    LLMs, image generators, and code assistants are genuinely impressive – and represent real advances in how machines work with human language and creativity. But fundamentally, they’re still sophisticated pattern-matching systems. And we’re still at the very beginning of figuring out their usefulness (and downsides). 

    “It can be amazing and amazingly wrong”

    #4:

    They are not all AI evangelists: “the truth is in the middle”

    Some people are very optimistic about AI, while others are very pessimistic. However, if you actually work in AI, you’re usually neither – especially when it comes to GenAI. The truth is in the middle. “It can be amazing and amazingly wrong” is an often-repeated observation. And recent research suggests that the latest frontier “reasoning models” have serious limitations (that said, it’s these types of papers that also tend to get overhyped by people who wish AI would just go away and leave their jobs alone).

    Meanwhile, various tricks have been employed to make a GenAI’s output more reliable, with the most basic being the retraining of an algorithm based on feedback from human subject matter experts. RAG (Retrieval-Augmented Generation) was a significant leap forward since it enables a GenAI algorithm to tap into data sources that are more reliable than, say, that World Wide Tissue Of Lies called the internet. Now, “Reasoning AI” appears to be another method of reducing hallucinations by taking more time to consider options and self-checking. “Agentic AI” is the latest hype people are embracing as the “big answer to everything.” However, many still believe the next revolution is yet to come (or perhaps we’re already nearing the outer limit). 
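    The retrieve-then-generate shape of RAG can be sketched in miniature. This toy uses simple word overlap where real systems use vector embeddings, and a format string where they’d use an actual LLM – but the grounding idea is the same: fetch a trusted snippet first, then answer from it:

    ```python
    # A bare-bones sketch of the RAG idea: retrieve the most relevant
    # snippet from a trusted document collection, then hand it to the
    # "generator" as grounding context instead of letting it free-associate.
    def retrieve(question, documents):
        """Pick the document sharing the most words with the question."""
        q = set(question.lower().split())
        return max(documents, key=lambda d: len(q & set(d.lower().split())))

    def answer(question, documents):
        context = retrieve(question, documents)
        # In real RAG, an LLM generates from the question plus this context.
        return f"Based on the source: {context}"

    docs = [
        "The trial enrolled 120 patients across three hospitals.",
        "The model was trained on intensive care unit data.",
    ]
    print(answer("How many patients were in the trial?", docs))
    ```

    The payoff is that the generator is answering from a vetted source rather than from that World Wide Tissue Of Lies – which is exactly why the technique caught on in healthcare.
    
    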

    Meanwhile, the best course of action remains the one a friend’s journalism professor always preached: “Even if it’s your mother saying she loves you, always check the facts.”

    #5:

    Don’t be a space case, focus on the use case

    Too many people still believe AI is a catch-all solution that can solve everything. And sure, frontier GenAI algorithms are improving rapidly as generalists. But if you genuinely want to solve a problem, you must first dive deep to determine the real issue and see if AI might provide a handy solution.

    Sometimes it’s just easier – and faster – to wash your own dirty dishes.

    That’s not to say GenAI cannot be part of a solution. The way LLMs handle natural language is already working to democratise R&D by making it easier for non-English speakers and those early in their careers to find the information they seek.  

    But in short, AI researchers focus on specific use cases based on business and/or personal strategy, not “AI strategy”. Where can I gain value for myself or my customers?

    “With few regulations and shifting vocabularies, many companies freely use whatever terms make a sale – when in fact their AI is weak or non-existent.”

    #6:

    There’s a lot of fake AI out there

    Many companies slap “AI” labels on basic statistical analysis or simple rule-based systems. Real AI requires machine learning, not just automated calculations.

    AI is still enjoying its “Wild West” moment. With few regulations and shifting vocabularies, many companies freely use whatever terms make a sale – when in fact their AI is weak or non-existent. The result is a confused and disappointed marketplace that only slows the road to actual innovation.

    That ChatGPT can generate a wholly false answer while still sounding convincing is the same sort of limitation some companies exhibit when they crow about their AI abilities – it’s pure hallucination.

    “It’s more like experimental cooking where you try, fail, adjust, and try again dozens of times.”

    #7:

    Don’t call them software engineers 

    Sure, some of their best friends might be software engineers, and there’s plenty of coding required to create an AI product. But they’re two separate beasts.

    Software engineers build reliable, scalable systems that solve known problems – it’s about implementation. With AI, you’re constantly trying to find an answer – it’s about research. “You need a certain tenacity to produce a tangible, useful, market-fitting product,” as one expert told me.

    “It’s no longer about building with code to get the output you want. The AI solution is about playing with the data. Data scientists are very much closer to the problem than a coder. Their work is very deep and very contextualized.”

    In short, AI development is iterative, not linear. It’s more like experimental cooking where you try, fail, adjust, and try again dozens of times.
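
    That try-fail-adjust loop can be shown in miniature. The “model” below is just a stand-in: a single cutoff rule scored against a handful of made-up examples, with the loop tasting each setting and keeping the best – experimental cooking, not a real training run:

    ```python
    # The "experimental cooking" loop in miniature: try a setting,
    # measure how well it works, adjust, and try again.

    def score(threshold, examples):
        """Fraction of (value, label) examples a simple cutoff rule gets right."""
        return sum((value >= threshold) == label for value, label in examples) / len(examples)

    # Toy labelled data: values above ~0.5 should be flagged True.
    examples = [(0.2, False), (0.4, False), (0.6, True), (0.9, True)]

    best_threshold, best_score = None, -1.0
    for threshold in [0.1, 0.3, 0.5, 0.7]:   # try, fail, adjust, try again
        s = score(threshold, examples)
        if s > best_score:
            best_threshold, best_score = threshold, s

    print(best_threshold, best_score)  # 0.5 1.0
    ```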

    “At the end of the day, serving the greater good is always good for any brand.”

    #8:

    It’s all about data. So shut up about AI, until you get your data act together

    It’s an industry mantra: “garbage in, garbage out.” The quality of AI output depends entirely on input data quality. No amount of algorithmic wizardry can fix fundamentally limited, flawed, or biased datasets.

    First and foremost, you need to get your data house in order. Ideally, you should follow the universal principles of FAIR (Findable, Accessible, Interoperable, Reusable). Yes, you can hold onto your IP and make your money – that’s how the system works. But that doesn’t mean you can’t organise your data in a semi-universal way. One day, you might add your compatible data to other compatible data and together discover patterns that lead to a breakthrough in, say, cancer treatment. And at the end of the day, serving the greater good is always good for any brand – and therefore the bottom line.
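
    Getting your data house in order starts with embarrassingly basic checks. Here’s a minimal “garbage in, garbage out” gate that refuses to bless a dataset until missing values and duplicate rows are counted – the field names (`patient_id`, `age`) are hypothetical:

    ```python
    # Minimal data-quality gate: count rows with missing required fields
    # and exact duplicate rows before anyone starts talking about AI.

    def data_quality_report(records, required_fields):
        """Summarise basic defects in a list of dict-shaped records."""
        missing = sum(
            1 for r in records
            if any(r.get(field) in (None, "") for field in required_fields)
        )
        seen, duplicates = set(), 0
        for r in records:
            key = tuple(sorted(r.items()))
            if key in seen:
                duplicates += 1
            seen.add(key)
        return {"rows": len(records), "missing": missing, "duplicates": duplicates}

    records = [
        {"patient_id": "a1", "age": 61},
        {"patient_id": "a1", "age": 61},    # exact duplicate row
        {"patient_id": "b2", "age": None},  # missing required value
    ]
    report = data_quality_report(records, required_fields=["patient_id", "age"])
    print(report)  # {'rows': 3, 'missing': 1, 'duplicates': 1}
    ```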

    COVID brought this insight to the forefront: shared data can improve outcomes. Faced with an emergency, “what would have taken years took months.” Lives were saved, but it took a whole lot of data fiddling…

    #9:

    Indeed, it’s fiddly work (and often far from sexy)

    It’s often said that data scientists spend 80% of their time cleaning and preparing data, and only 20% building models. This fiddle factor extends tendril-like in all sorts of directions.

    That AI model from last year that was 99% accurate? It’s now performing worse because the world changes. AI systems need constant maintenance and retraining. Meanwhile, there’s a real trade-off between explainability and performance. You can have AI that works great, or AI whose inner workings you can easily understand – but rarely both. Complex problems often require complex solutions. And while AI can be amazing at spotting patterns, correlation isn’t causation – AI doesn’t explain why relationships exist. That’s on you. 
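
    The “world changes” problem has a name – data drift – and the most basic check is to compare what the model sees in production against what it was trained on. A toy sketch, with invented numbers and a threshold chosen purely for illustration:

    ```python
    # Toy drift check: yesterday's accurate model can quietly degrade
    # when live data no longer looks like the training data.

    from statistics import mean, stdev

    def drifted(train_values, live_values, z_threshold=3.0):
        """Flag drift when the live mean strays far from the training mean."""
        mu, sigma = mean(train_values), stdev(train_values)
        z = abs(mean(live_values) - mu) / (sigma or 1.0)
        return z > z_threshold

    train = [10, 11, 9, 10, 12, 10, 11]   # what the model was trained on
    stable = [10, 11, 10]                 # live data, still familiar
    shifted = [25, 27, 26]                # live data after the world changed

    print(drifted(train, stable))   # False
    print(drifted(train, shifted))  # True
    ```

    Production monitoring does this continuously, per feature, and triggers retraining when the alarm fires – which is exactly the constant-maintenance fiddling the experts keep warning about.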

    Every algorithm has limitations and blind spots. There’s no perfect, universal AI solution. Hence: fiddle ad nauseam.

    And understanding this reality helps set more realistic timelines and expectations.

    “If the AI product or service enhances someone’s job and/or lives: great. If it doesn’t: screw it. Humans are finicky that way.”

    #10:

    It’s all about hybrid intelligence not artificial replacement

    Human-machine collaboration is key to any successful AI application. Technology should enhance human activity, not replace it. Many argue for changing the meaning of the acronym “AI” from Artificial Intelligence to Augmented Intelligence.

    Humans are always “in the loop.” Initially, they serve as expert BS detectors to properly train the algorithm and guide its continual improvement. 

    Furthermore, humans are essential for adopting the end products, emphasising the importance of strong UX and managing change. Naturally, one should avoid jargon like “change management” or “digital transformation” when trying to win over a potential end-user. People will roll their eyes, “Oh, here’s management with their shiny-thing-of-the-month.”

    If the AI product or service enhances someone’s job and/or lives: great. If it doesn’t: screw it. Humans are finicky that way. 

    “We should remain healthily scared of scary people – they’re likely the ones who will do the most damage.”

    #11:

    Don’t be scared… yet

    AI isn’t magic. It’s just really good at finding patterns in data, not at common sense or understanding context like humans do. But it’s tricky… To reiterate a previous quote: “AI can do amazing things, but there are aspects that scare me.”

    Specific sectors are likely to experience employment chaos. This has a name: “creative destruction.” It happens whenever a new technology comes to town (like steam, mechanization, electricity, or computers), and humans spend years (or decades) muddling about to figure out how to harness it before the advantages become manifest and there’s a net gain of jobs. On some levels, the already-worn cliché seems likely to come true: “AI won’t take your job, but someone using AI might.” 

    But you shouldn’t be scared – just vigilant. Maybe you can start by reading this great article: ‘A.I. might take your job. Here are 22 new ones it could give you’.

    In terms of full-blown apocalypse, technology may evolve to the point where we should be scared. That’s why many people welcome smart regulation – in Europe, anyway – whereby everyone knows the rules by which to play.

    Of course, we should remain healthily scared of scary people – they’re likely the ones who will do the most damage. Happily, most people in the field still describe themselves as “cautious optimists”.

    #12:

    There’s huge potential for job satisfaction

    Despite all the doomsday scenarios, data cleaning and fiddly work, AI is still an exciting field with lots of opportunities to make an impact in terms of making the world a better place. You don’t have to work on improving click-through rates for buy-buy-buy campaigns. You can work on a cure for cancer instead – and the salaries are also equalizing across all industries. So that’s nice: getting paid well for being a do-gooder. 

    The bottom line

    These experts have all done a great job of describing their roles in relatable terms. So here’s my challenge: ask them yourself. Most are happy to explain what they do, especially if you approach them like their favourite relative who’s genuinely curious about their work – and show basic respect by having at least a splinter of a tech bone in your body.

    And remember: in a field moving this fast, today’s expert insights are tomorrow’s basic knowledge. These conversations will continue, and I will likely be forced to update this article later this afternoon. 

  • AI’s Nobel Prize victory lap: Is Time Magazine next?

    AI’s Nobel Prize victory lap: Is Time Magazine next?

    AI casually swept two Nobel Prizes this year – not bad for a bunch of zeros… and ones. But is it enough to make Time’s ‘Person of the Year’? Or will the on-the-ball Yuval Noah Harari intervene? Read all about it in this edition of ‘Manufacturing – The News.’

    Read the complete edition of Manufacturing – The News:
    Will AI become Time’s Person of the Year?


    While Yuval Noah Harari warns us that AI might be hacking the operating system of human civilization (while still recognising the potential), business leaders are finally sobering up from their two-year AI experimentation bender. The party is officially over: ROI is once again king.

    Happily, AI continues to prove its worth, albeit for very specific use cases. CuspAI is creating materials-on-demand that can be deployed for cheap carbon capture. Every Cure is repurposing existing drugs to treat currently untreatable diseases. Formula 1 teams are shrinking car development cycles from five years to two. Honeywell’s CEO claims AI copilots can turn five-year rookies into fifteen-year veterans. Digital twins – a fancy way of saying “running simulations” – can extend the life of airplanes by 30% and spot problems before they become expensive disasters. Etcetera.

    However, “accuracy and reliability remain significant issues” for many other use cases. Therefore, perhaps Time magazine should bypass AI as the obvious option for Person of the Year and instead recognize the early AI adopters. They’re still the ones doing the heavy lifting.

    Read the complete edition of Manufacturing – The News:
    Will AI become Time’s Person of the Year?

  • Hannover Messe 2025: A Euro-Canadian love fest – with robots

    Hannover Messe 2025: A Euro-Canadian love fest – with robots


    Read all aboot it: Canadians were in full force at the world’s biggest manufacturing trade show, while Trump was teasing out his tariff threats. People were confused, but at least they could be confused together. But this deep uncertainty offered a certain clarity: the need to double down on what you can control—building on trusted partnerships and investing in AI solutions. And as a result, optimism bloomed. In fact, can Eurobots save the world?


    Read my report: ‘Hannover 2025: Bypassing the Elephant in the Room with Optimism.’


    Like a warm bath filled with maple syrup

    I self-identify as Canadian Eurotrash. So, it was especially great to visit Hannover Messe this year. The independent nation of Canada hosted an array of events that pumped national identity: Bloody Caesars! Vegan Beef Jerky! Indigenous Cheese Plates! Maple syrup-based soda pop! Anglo-Canadians apologising for not speaking French! Franco-Canadians apologising for speaking English with a French accent! Shared bitterness towards former-friendly neighbors!

    Meanwhile, Canada and the EU bonded. The two entities share many common goals: building trustworthy AI frameworks, continued support for Ukraine, tapping into fast-growing Asian markets, cutting innovation-stifling red tape, increasing capital for startups and scaleups, and more. And they also both feel somewhat jilted by their former paramour, the USA – excuse the French.

    Get a room!

    Canada’s Ambassador to the EU, H.E. Dr. Ailish Campbell, complimented the EU on its achievements: taking a bag of cats – the 27 nations comprising the EU – and “not only building a strong unified market but also presenting a European way of life, both in contrast to and able to partner with both the US and China.” Meanwhile, the EU’s Chief Trade Enforcement Officer, Denis Redonnet, returned the compliment, calling Canada “an island of stability in a world of instability.” It was a love fest.

    And speaking of another bag of cats: AI. Can robots bring world peace?

    Make robots, not war

    Meanwhile, David Reger, founder of Neura Robotics, is enjoying a banner year – backed by a fresh €120 million financing round from European investors and recognition as Entrepreneur of the Year. As the originator of the term “cognitive robotics,” his company develops robots combining AI with groundbreaking sensors and hardware design to not only empty your dishwasher but also address the skilled-labor shortage – and potentially save Germany’s ailing automotive industry.

    And during his fireside chat, ‘Artificial Intelligence and its Physical Embodiment—Access to EU Finance,’ Reger explained how he is expanding his vision even more: “We need to build on European efficiency. It’s what we excel at, though it’s diminished lately. Physical AI is an area we haven’t yet missed out on. And this is where efficiency plays the biggest role.”

    And for this very reason, he refuses to apply his technology to weaponry. “The whole world once depended on Germany and Europe because of our technological advancement, something we’re losing. That’s also why we’re talking about wars again—because the world doesn’t depend on us as much as before. I believe the answer is advancing our technology until the world depends on us again. Then we won’t need to weaponize ourselves.”

    How’s that for an optimistic vision?


    Read the full report: ‘Hannover 2025: Bypassing the Elephant in the Room with Optimism.’

  • 25 times Medical Data + Pizza: how carbs are transforming healthcare

    25 times Medical Data + Pizza: how carbs are transforming healthcare

    “Covering 5 years, the Medical Data + Pizza event series has followed a compelling timeline, encompassing the period when AI came of age and became sexy. We spoke with AI professor and co-founder Mark Hoogendoorn about the challenging task of bringing AI to the bedside – in Amsterdam and across Europe – and about the power of pizza to get things moving.”


    Thanks to EdenFrost and the Amsterdam Economic Board, I received a free education in AI by reporting on nearly every edition of the Medical Data + Pizza event. After 25 editions, it was a good time to chat and reflect with cofounder Mark Hoogendoorn, a professor of artificial intelligence at the Vrije Universiteit Amsterdam’s Department of Computer Science.

     Read the full report: ‘What’s in a number? 25 times Medical Data & Pizza’.
    Read my reports from previous Medical Data + Pizza Meet-ups

    Building an AI ecosystem one slice at a time

    It began as a simple concept: to play cupid between two very different beasts. On one side you have the data scientists – always hungry for pizza, but also for real-world problems to solve. And on the other, the medical professionals – who share the pizza hunger, but already have plenty of very real-world problems on offer.

    “Together with medical counterpart Paul Elbers, intensivist and associate professor of intensive care medicine at Amsterdam UMC, Mark formed the Amsterdam Medical Data Science (AMDS) network. Supported by Amsterdam UMC, OLVG, Vrije Universiteit, PacMed, and Amsterdam Economic Board, the network grew rapidly to 2,154 members and counting.”

    “The event has seemingly covered it all – from modelling hospital admissions during COVID to racist algorithms and from making humans and machines get along to predicting the best drug combinations to target an individual’s brain cancer.”

    “…What shines through at each Data + Pizza event is this: people want to make a positive impact on the world, even though they are aware of the monumental task at hand – technically, ethically, regulatorily. There’s a certain shared belief in a happy ending if we all just get to work.”

    “It’s true, I’m a fairly optimistic person by nature,” Mark smiles.

    Onward and upward

    The event snowballed from data scientists and doctors to attract a remarkably diverse crowd in terms of age, gender, nationality, and profession. And of course, as AI became an increasingly hot topic, it only snowballed further. 

    “Meanwhile, Mark is looking forward to seeing how Europe forges ahead with a more people-centred approach to AI – in contrast to the US, where corporations tend to control patient data, or China, where the government calls the shots. Hoogendoorn believes the EU approach is the way forward, even as critics say the required infrastructure will work to slow innovation in the region.”

    “We need these safeguards in place. We have to be careful with potential downsides,” says Mark. “In the long run, I think it will prove beneficial, because you’ll have more support from the public. I don’t think patients are against sharing their data if it helps the next patient. People’s distrust is more directed at the government and policymakers.”

    Read the full report: ‘What’s in a number? 25 times Medical Data & Pizza’.
    Read my reports from previous Medical Data + Pizza Meet-ups.

  • Seeking truth amid a disinfodemic (and why scientists need better storytelling skills)

    Seeking truth amid a disinfodemic (and why scientists need better storytelling skills)

    Welcome to 2020, where we’re fighting both a viral pandemic and a “disinfodemic”. And yes, social media companies say they “deplatform” obviously false theories. But there’s a loophole: if you wrap your bleach-gargling cure in a larger QAnon narrative about Satan-worshipping pedophiles, you might still get listed.

    Last summer, Amsterdam Data Science (ADS) and Amsterdam Medical Data Science (AMDS) co-hosted a series of online lectures in collaboration with Elsevier and Google that explored ‘The Power and the Weakness of Data and Modelling in COVID-19’. And they saved the best for last: ‘COVID-19 and the Media’, for which I wrote a report thanks to EdenFrost and Amsterdam Economic Board.

    Read the full report: ‘Media and the lies around COVID-19’

    Dr. Marcel Becker from Radboud University argued we need to stop obsessing over defining “truth” and instead apply it as a verb: “truth finding”. After all, even judges and scientists approach truth differently – judges need verdicts with deadlines, while scientists cultivate “doubt and suspicion” as a never-ending story.

    And it only gets messier: Throw social media and identity politics into the mix, and suddenly, scientific information looks identical to the conspiracy theories from your former colleague you just blocked on Facebook. “Fear spreads faster than the infection,” Becker notes, which explains why people hoard toilet paper while ignoring health advice.

    Meanwhile, PhD researcher Emillie de Keulenaar discovered that social media platforms are playing a fascinating double game. Sure, Facebook and YouTube are “deplatforming” unproven theories like “COVID-19 is caused by 5G radiation.” But they’ve found a clever loophole: “borderline content” gets buried rather than banned. So, if you wrap your bleach-gargling cure in a larger QAnon narrative about Satan-worshipping paedophiles, congratulations – you might still get listed.

    The solution? Scientists need to step up their storytelling game instead of hiding behind jargon. As Becker puts it, they should “use the narrative frame – tell stories as journalists have long done” and remember that “doubt is a virtue, not a shortcoming”.

    As Dr Becker eloquently summarises: “YouTube can say the truth does not exist. But given the current situation of knowledge in the scientific community, we can still talk about bullshit and non-bullshit.”

    Read the full report: ‘Media and the lies around COVID-19’

  • Should AI be more like oil or O2?

    Should AI be more like oil or O2?

    Who should own the data used for healthcare applications? Is data really ‘the new oil,’ a resource controlled by the few? Or should data be considered a universal human right, like oxygen? After all, we own our kidneys until death, so why not our data? Is there a donor model we can adopt? 

    Usually, when representatives from business, government, and research come together, there are clashes of perspective. Not tonight – so that was nice…

    Thanks to EdenFrost and via the Amsterdam Economic Board, I wrote a report from a World AI Week panel discussion in which the disparate participants all agreed: We need to develop well-thought-out protocols quickly; otherwise, entities like Amazon and China will soon run the show.

    Read the full report: ‘Applying data science and AI to healthcare: Oil or Oxygen?’

    Serving the patient

    Jeroen Tas is the Chief Innovation and Strategy Officer at Philips. His daughter has an advanced stage of type 1 diabetes, so his motivation is deeply personal. He’s passionate about providing healthcare professionals with the fullest context of a person’s disease.

    “It’s like how self-driving cars function. You need many different angles and approaches to get a full picture of reality. Only then can it become effective,” Tas observes. “And this is especially true with cancer, since every case is different.”

    And that involves getting everyone on board in sharing data. “I see it as a sort of data donorship – like with organs. And the technologies required are already available.”

    Open, fair, inclusive

    As founder and managing director of the research institute Waag Society, Marleen Stikker fiercely believes in the democratisation of technology and transparency in dealing with data and algorithms. Obviously, she doesn’t want the Amazons and Alibabas of the world to control healthcare data. 

    She believes we need time to think and really understand what we are doing. “What’s in it for the individual? How do we stay in control of all this stuff that we’re told is beautiful for us? Humans seem to have this fear: without tech we will fail.” (Cue her laptop freezing mid-presentation.)

    Her solution is a Digital Commons that allows Amsterdammers – and potentially all world citizens – to determine the data they share, who they share it with, and for what purposes. 

    But who will pay for it?

    The middle path

    All eyes turn to Ger Baron, Amsterdam’s Chief Technical Officer, who believes the city can play a strong role in setting up such a system. But Baron admits this process of developing regulations might slow things down. 

    “But in the long run, it will make it all easier. It’s still a mess everywhere – and we’d be first to have such a system before rolling it out across the EU. Just think, for example, about the potential lawsuits that might be avoided.”

    Is the show now truly on the road?

    Read the full report: ‘Applying data science and AI to healthcare: Oil or Oxygen?’

    Update: The European Health Data Space Regulation (EHDS) was adopted in March 2025, and full implementation is set for 2029…