12 things AI tech experts wish you knew

An obviously GenAI-generated robot holding up 7 fingers. Not 12...

I’ve had the pleasure of talking with hundreds of people working in data science and AI. Luckily, since they spend most of their time fiddling with data, they are a patient lot. It’s been one long masterclass. More often than not, I ask the question, ‘What do you wish everyone knew that would make your job easier?’ Specific patterns emerged from their answers…

I am not particularly a techie – I am more interested in the people and stories surrounding the tech. But you need to understand what it’s all about. So, my favourite first question is a bit of a cheat: “How do you describe your job to your favourite relative who’s naturally interested in what you do but doesn’t have a tech bone in their body?”

I achieve a few things with this question: 1) I set myself up as harmless, and 2) I usually get a thoughtful, straightforward, and jargon-free response – something the world needs more of.

I also shamelessly appeal to their emotions by putting a name to this relative. For Dutch AI professionals, I cast myself as their Tante Truus. For those with Chinese roots, I volunteer as their 阿美姨. Russians naturally get Дя́дя Ва́ня, Hindi speakers get सुनीता मासी, and for Canadians, I cut to the chase: “Pretend I’m your happily eccentric Uncle Steve who loves you very much.”

It breaks the ice and gives me insight into what they do. In gratitude, I flip the perspective at the end: “What do you wish everyone knew that would make your job easier?” 

And as with any AI product, you need metrics to evaluate its usefulness – which can be tricky. My metric for choosing the following insights was based on hearing them at least five to ten times – not so tricky. 

“While most are passionate about their jobs, they never expected their jobs would ever induce passion.”

#1:

AI experts are bemused by all the AI hype

These are exciting times to work in data science and AI. Those who’ve been in the field longer than a few years have watched their once academic and niche profession become sexy. While most are passionate about their jobs, they never expected their jobs would ever induce passion. Many feel blessed to be living through such dramatic times for their trade. But it’s all still pretty weird.

#2:

AI is a marketing term

“AI” is merely an all-encompassing marketing umbrella. Those who’ve worked in “AI” for a while have likely seen their job title change multiple times. Once upon a time, they may have been computer scientists, cybernetics researchers, information retrieval specialists, statisticians, pattern recognition engineers, knowledge engineers, data miners, business intelligence analysts, etcetera.

Around a decade ago, they started getting lumped together as “data scientists” – just as the field started to get more public interest as a viable career path with viable paychecks. Today, most have “AI” in their title. As one industry veteran told me, “I’ve had five different job titles over the last 20 years, even though my job hasn’t changed that much.”

#3:

Most real AI work is still “meat-and-potatoes” stuff 

Yes, there’s a happy rainbow of AIs, ranging from simple rule-based systems (which don’t actually qualify as AI – see below) to advanced neural networks and large language models (LLMs) like Claude and ChatGPT. However, while all the current hype centres on Generative AI, traditional AI still handles most of the important tasks.

Old-school Machine Learning (ML) finds patterns in data to make predictions – like spam filters that learn what suspicious emails look like from thousands of examples. It’s why translation programs have also improved so dramatically. And ML has also certainly proven its worth for healthcare/medical diagnosis, climate/environmental solutions, and education/knowledge access. These three areas share common characteristics: they address fundamental human needs, have compounding positive effects across society, and leverage ML’s unique strengths in pattern recognition and optimisation at scale.
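To make the spam-filter idea concrete, here’s a minimal sketch of how a filter can “learn what suspicious emails look like from examples” – a toy naive Bayes classifier in plain Python, not any real product’s implementation, with made-up example emails:

```python
# Toy illustration: a naive Bayes classifier that learns which words are
# suspicious from a handful of labelled example emails.
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose words best explain the text (add-one smoothing)."""
    scores = {}
    for label, words in counts.items():
        total = sum(words.values())
        vocab = len(words)  # number of distinct words seen for this label
        scores[label] = sum(
            math.log((words[w] + 1) / (total + vocab))
            for w in text.lower().split()
        )
    return max(scores, key=scores.get)

examples = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
model = train(examples)
print(classify(model, "free money prize"))       # -> spam
print(classify(model, "team meeting tomorrow"))  # -> ham
```

Real filters use the same principle, just with far more examples and features – which is exactly why they keep improving as more labelled data arrives.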


GenAI goes one step further. Instead of just analysing existing data, it creates new content. As a text-based subset of GenAI, LLMs are trained on massive amounts of text to predict what words should come next, which lets them have conversations and generate human-like text.
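The “predict what words should come next” objective sounds abstract, but a toy version fits in a few lines. This sketch (purely illustrative – real LLMs pursue the same objective with billions of parameters and whole contexts, not single-word lookups) counts which word follows which in a tiny corpus, then predicts the most frequent follower:

```python
# Toy next-word predictor: count word-to-next-word transitions in a training
# text, then predict the most frequent follower of a given word.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Map each word to a Counter of the words that follow it."""
    follows = defaultdict(Counter)
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the most common follower, or None for unseen words."""
    candidates = follows.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

corpus = "the cat sat on the mat . the cat chased the mouse . the cat ran ."
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # -> cat (its most frequent follower)
```

Scale that idea up enormously, condition on the whole preceding context instead of one word, and you have the core of how an LLM generates text.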

Most “AI” today is really “pattern recognition on steroids,” as LLMs like Claude like to call it (repeatedly). It’s powerful and useful, but it isn’t the sci-fi artificial general intelligence that can think like humans across all domains (yet). Meanwhile, when companies attach “AI” to basic automation or simple algorithms, they’re typically overselling what’s truly happening under the hood.

LLMs, image generators, and code assistants are genuinely impressive – and represent real advances in how machines work with human language and creativity. But fundamentally, they’re still sophisticated pattern-matching systems. And we’re still at the very beginning of figuring out their usefulness (and downsides). 

“It can be amazing and amazingly wrong”

#4:

They are not all AI evangelists: “the truth is in the middle”

Some people are very optimistic about AI, while others are very pessimistic. However, if you actually work in AI, you’re usually neither – especially when it comes to GenAI. The truth is in the middle. “It can be amazing and amazingly wrong” is an often-repeated observation. And recent research suggests that the latest frontier “reasoning models” have serious limitations (that said, it’s these types of papers that also tend to get overhyped by people who wish AI would just go away and leave their job alone).

Meanwhile, various tricks have been employed to make a GenAI’s output more reliable, with the most basic being the retraining of an algorithm based on feedback from human subject matter experts. RAG (Retrieval-Augmented Generation) was a significant leap forward since it enables a GenAI algorithm to tap into data sources that are more reliable than, say, that World Wide Tissue Of Lies called the internet. Now, “Reasoning AI” appears to be another method of reducing hallucinations by taking more time to consider options and self-checking. “Agentic AI” is the latest hype people are embracing as the “big answer to everything.” However, many still believe the next revolution is yet to come (or perhaps we’re already nearing the outer limit). 
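As a rough illustration of the RAG pattern (a sketch of the idea, not any vendor’s actual pipeline), the code below uses a crude keyword-overlap retriever as a stand-in for real vector search, and stops where the LLM call would begin – the document texts and function names are all invented for the example:

```python
# Minimal sketch of Retrieval-Augmented Generation: fetch relevant text
# first, then make the model answer from that text rather than from memory.
def retrieve(query, documents, k=1):
    """Rank documents by crude word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Ground the model's answer in retrieved text, not in its training data."""
    context = "\n".join(retrieve(query, documents))
    return ("Answer using ONLY the context below. "
            "If the answer is not there, say so.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

docs = [
    "The company support line is open 9:00-17:00 on weekdays.",
    "Invoices are sent by email on the first of each month.",
]
prompt = build_prompt("When is the support line open?", docs)
# `prompt` would now be sent to an LLM; constraining the answer to retrieved
# context is what reduces hallucination.
```

Production systems swap the keyword overlap for embedding similarity over a curated knowledge base, but the grounding logic is the same.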

Meanwhile, the best course of action remains what a friend’s journalism professor always said: “Even if it’s your mother saying she loves you, always check the facts.”

#5:

Don’t be a space case, focus on the use case

Too many people still believe AI is a catch-all solution that can solve everything. And sure, frontier GenAI algorithms are improving rapidly as generalists. But if you genuinely want to solve a problem, you must first dive deep to determine the real issue and see if AI might provide a handy solution.

Sometimes it’s just easier – and faster – to wash your own dirty dishes.

That’s not to say GenAI cannot be part of a solution. The way LLMs handle natural language is already helping to democratise R&D by making it easier for non-English speakers and those early in their careers to find the information they seek.

But in short, AI researchers focus on specific use cases based on business and/or personal strategy, not “AI strategy”. Where can I gain value for myself or my customers?

“With few regulations and shifting vocabularies, many companies freely use whatever terms make a sale – when in fact their AI is weak or non-existent.”

#6:

There’s a lot of fake AI out there

Many companies slap “AI” labels on basic statistical analysis or simple rule-based systems. Real AI requires machine learning, not just automated calculations.

AI is still enjoying its “Wild West” moment. With few regulations and shifting vocabularies, many companies freely use whatever terms make a sale – when in fact their AI is weak or non-existent. The result is a confused and disappointed marketplace that only slows the road to actual innovation.

That ChatGPT can generate a wholly false answer while still sounding convincing is the same sort of limitation some companies exhibit when they crow about their AI abilities – it’s pure hallucination.

“It’s more like experimental cooking where you try, fail, adjust, and try again dozens of times.”

#7:

Don’t call them software engineers 

Sure, some of their best friends might be software engineers, and there’s plenty of coding required to create an AI product. But they’re two separate beasts.

Software engineers build reliable, scalable systems that solve known problems – it’s about implementation. With AI, you’re constantly trying to find an answer – it’s about research. “You need a certain tenacity to produce a tangible, useful, market-fitting product,” as one expert told me.

“It’s no longer about building with code to get the output you want. The AI solution is about playing with the data. Data scientists are very much closer to the problem than a coder. Their work is very deep and very contextualized.”

In short, AI development is iterative, not linear. It’s more like experimental cooking where you try, fail, adjust, and try again dozens of times.

“At the end of the day, serving the greater good is always good for any brand.”

#8:

It’s all about data. So shut up about AI until you get your data act together

It’s an industry mantra: “garbage in, garbage out.” The quality of AI output depends entirely on input data quality. No amount of algorithmic wizardry can fix fundamentally limited, flawed, or biased datasets.

First and foremost, you need to get your data house in order. Ideally, you should follow the universal principles of FAIR (Findable, Accessible, Interoperable, Reusable). Yes, you can hold onto your IP and make your money – that’s how the system works. But that doesn’t mean you can’t organise your data in a semi-universal way. One day, you might add your compatible data to other compatible data and together discover patterns that lead to a breakthrough in, say, cancer treatment. And at the end of the day, serving the greater good is always good for any brand – and therefore the bottom line.

COVID brought this insight to the forefront: shared data can improve outcomes. Faced with an emergency, “what would have taken years took months.” Lives were saved, but it took a whole lot of data fiddling…

#9:

Indeed, it’s fiddly work (and often far from sexy)

It’s often said that data scientists spend 80% of their time cleaning and preparing data, and only 20% building models. This fiddle factor extends tendril-like in all sorts of directions.

That AI model from last year that was 99% accurate? It’s now performing worse because the world has changed. AI systems need constant maintenance and retraining. Meanwhile, there’s a real trade-off between explainability and performance: you can have AI that works well, or AI whose inner workings you can easily understand – but rarely both. Complex problems often require complex solutions. And while AI can be amazing at spotting patterns, correlation isn’t causation – AI doesn’t explain why relationships exist. That’s on you.

Every algorithm has limitations and blind spots. There’s no perfect, universal AI solution. Hence: fiddle ad nauseam.

And understanding this reality helps set more realistic timelines and expectations.

“If the AI product or service enhances someone’s job and/or lives: great. If it doesn’t: screw it. Humans are finicky that way.”

#10:

It’s all about hybrid intelligence not artificial replacement

Human-machine collaboration is key to any successful AI application. Technology should enhance human activity, not replace it. Many argue for changing the meaning of the acronym “AI” from Artificial Intelligence to Augmented Intelligence.

Humans are always “in the loop.” Initially, they serve as expert BS detectors to properly train the algorithm and guide its continual improvement. 

Furthermore, humans are essential for adopting the end products, emphasising the importance of strong UX and managing change. Naturally, one should avoid jargon like “change management” or “digital transformation” when trying to win over a potential end-user. People will roll their eyes, “Oh, here’s management with their shiny-thing-of-the-month.”

If the AI product or service enhances someone’s job and/or lives: great. If it doesn’t: screw it. Humans are finicky that way. 

“We should remain healthily scared of scary people – they’re likely the ones who will do the most damage.”

#11:

Don’t be scared… yet

AI isn’t magic. It’s just really good at finding patterns in data – not at common sense or understanding context like humans do. But it’s tricky… As one expert put it: “AI can do amazing things, but there are aspects that scare me.”

Specific sectors are likely to experience employment chaos. This has a name: “creative destruction.” It happens whenever a new technology comes to town (like steam, mechanization, electricity, or computers), and humans spend years (or decades) muddling about to figure out how to harness it before the advantages become manifest and there’s a net gain of jobs. On some levels, the already-worn cliché seems likely to come true: “AI won’t take your job, but someone using AI might.”

But you shouldn’t be scared – just vigilant. Maybe you can start by reading this great article: ‘A.I. might take your job. Here are 22 new ones it could give you’.

In terms of full-blown apocalypse, technology may evolve to the point where we should be scared. That’s why many people welcome smart regulation – in Europe, anyway – whereby everyone knows the rules by which to play.

Of course, we should remain healthily scared of scary people – they’re likely the ones who will do the most damage. Happily, most people in the field still describe themselves as “cautious optimists”.

#12:

There’s huge potential for job satisfaction

Despite all the doomsday scenarios, data cleaning, and fiddly work, AI is still an exciting field with plenty of opportunities to make the world a better place. You don’t have to work on improving click-through rates for buy-buy-buy campaigns. You can work on a cure for cancer instead – and salaries are equalizing across industries. So that’s nice: getting paid well for being a do-gooder.

The bottom line

These experts have all done a great job of describing their roles in relatable terms. So here’s my challenge: ask them yourself. Most are happy to explain what they do, especially if you approach them like their favourite relative who’s genuinely curious about their work – and show basic respect by having at least a splinter of a tech bone in your body.

And remember: in a field moving this fast, today’s expert insights are tomorrow’s basic knowledge. These conversations will continue, and I will likely be forced to update this article later this afternoon. 
