
Will AI take your job?

John Tripodi
The short answer is no. The long answer: read on. (Take vitamins to get through this one!)

What is AI?

AI is a field of mathematics and computer science dating back to the 1950s, concerned with the ability of computer systems to perform tasks that typically require human intelligence, such as learning, reasoning, and problem solving. Initially viewed as a fringe academic pursuit, AI began forming in earnest in the 1980s and 90s, took off in the 2010s, exploded in the 2020s, and became an international phenomenon from 2024, with China formally entering the top tier by releasing cheaper, open-source alternatives.

Is it intelligence? It’s something, but it’s not human intelligence. Talk of ‘PhD-level assistants’ built on the current AI models is inaccurate, and quite fanciful. We don’t even have an agreed understanding of what ‘intelligence’ actually is. Debates continue in philosophy, science, education, psychology, neuroscience and literature over understanding, consciousness, embodiment, and agency. The AI field has some working definitions, and ‘machine learning’ has been around for a couple of decades, but it’s another thing entirely to qualify as a human equivalent in the real world. The current AI models under development are therefore not without their critics:

“AI progress today is like alchemy — results without principles. We need a science of intelligence.”
Yoshua Bengio, Turing Award winner

Who is playing in the AI sandbox?

It appears the tech battlefront will be fought between the US and China, as no one else has the stomach for it (i.e. they’re too far behind, with unclear benefits to drive investment). The US big tech firms may get the headlines and investors’ $billions, but China is not standing still. At the time of posting, China had just released its 1,500th AI model for 2025, practically more than the rest of the world combined, and announced that AI will form part of its primary school curriculum. China’s models tend to be open source, meaning their source code is available to developers, and they’re also free. They use alternative approaches requiring far less compute power, and therefore energy, to run; 75-90% less than their US counterparts is not uncommon.

While this is happening, some US tech CEOs and media tell us gleefully that AI will take all our jobs… Really? Even though industry commentators deride it for lacking common sense and context, it’s prone to making errors (a.k.a. ‘hallucinating’), and even Elon Musk says AI cannot invent anything. Lawsuits are pending as AI chatbots serve up harmful responses to vulnerable users.

A leak from the AI hot air balloon?

If Michael Lewis were to write The New New Thing (1999) today, AI would be front and center with the Tech and Investment communities proclaiming its promises. In fact, Lewis ends his original book highlighting that tech innovation is always chasing something just out of reach. That was 26 years ago, and we recognise the same pattern with AI.

How are the AI promises going? Many of us use it to improve our words or create cute pics and videos. Others are using it to speed up or help them write code, known as ‘vibe coding’. Is it more than a sophisticated Google search that gives you an answer (hopefully the right one) instead of links? A new way to automate systems and processes? Will it take all our jobs? It appears there’ll be some respite, as the phenomenon that is Google search is still growing despite the uptake of AI models such as ChatGPT.

How is the ‘intelligence’ part of it progressing? One cognitive scientist said of AI, four or five years ago (to paraphrase), that ‘if California wants to spend billions of dollars on LLMs (Large Language Models) to develop better transcripts, well, go ahead. But it is not going to advance the science.’ This has always stuck with me when I think about what’s going on in AI: does it have context? Is it thinking and reasoning? Does it have understanding? Does it know what it is doing? Can it learn? Can it predict?

I suspect if Lewis has another book in him, he could write about how the 'AI revolution' attracted so much fanfare and investment dollars that nothing else really mattered. He would discuss how some of AI’s proponents warn us to fear it - which I’ve never really understood. We’re not living in a dystopian sci-fi movie. The AI researchers do not talk about it like this, but some of its biggest business leaders do, or at the very least, tell us (gleefully) that it will replace our jobs.

Will AI go the same way as Big Data before it and become a feature, not a revolution? Since the 2010s, there’s been no evidence of improved or increasing operating margins, or better financial performance, among the ‘data-driven’ leaders of the Fortune 500. Early signs for AI are so far proving the same, with only Google’s Sundar Pichai saying AI tools have made his software engineers ~10% more productive. Think of it another way: if data were everything, then every Hollywood movie would be a hit and every set play would produce a goal. Are we falling for the same ruse once again?

The AI stories are interesting and the pitfalls are entertaining. Lewis could spend a few paragraphs, for example, on Builder.ai, once hailed as one of the world’s most promising AI unicorns (over $1B US valuation), which failed this May despite hundreds of millions of dollars raised across multiple investor rounds, including from Microsoft. Its founder and CEO, the self-anointed Chief Wizard, Sachin Dev Duggal, jokingly quipped in a speech some five years ago that at his company, ‘AI doesn’t mean Another Indian’. In actual fact, it was exactly that: 700 humans in India filling in for the AI app that was supposed to let non-technical folks build their own apps from prompts and instructions. There was no AI to be found, and the scam ran for eight years.

Recall the Amazon physical store that automatically charged you for your goods without a checkout? That too was a bunch of tele-operators watching what you picked off the shelves and charging your Amazon account for each item accordingly. The human element of these AI models should not be underestimated. The biggest models actively use sub-contractors in Kenya and other low-wage countries to, for example, sift through graphic and child-unsafe content to remove it from the models’ output. It is reportedly extremely trying work, with little to no policies or programs to assist these data workers. There are many reports of psychological issues and family breakdowns after only a few months’ work. Some of these outsourced contract firms are known to have stopped performing this type of work for the AI giants.

Unfazed, the latest move in this high-stakes game is to pay the best and brightest in AI astronomical salaries, with the top 20-30 AI scientists and engineers all earning between $10m and $100m each year. The bidding war for AI talent has hit another level, with Meta’s Zuckerberg personally emailing prospects with huge ‘cannot say no’ offers to join his AI team. They’ll need managers and agents next, like sports stars and Hollywood’s finest. News just in: Meta offered an ex-OpenAI scientist $1.25B over 4 years, which they actually turned down to stay at their own startup. More news just in: a 24-year-old AI coder just accepted a $250m offer in stock and cash over 4 years from Zuckerberg (Meta doubled it after he didn’t take a $125m offer).

The stories help with the whole AI hype cycle. At the same time, all the Big Tech firms are positioning themselves to be the AI leader, including Microsoft, Google, Meta, Amazon and Tesla. For Meta, Zuckerberg has just said they want to build ‘personalised super intelligence’. They want to keep you on their platforms and are investing $billions to do it.

Next wave of AI – the Agentic AI task-masters

ChatGPT says: Agentic AI refers to artificial intelligence systems that can take autonomous actions to achieve goals, often over extended periods and across different environments or tasks. (This is the only AI-generated sentence in this blog!)

From finding and booking accommodation on your next big family trip, managing your inbox and calendar, financial portfolio management, automating business reporting, to finding your next investment property. A tireless assistant that operates within the boundaries you set, freeing your time and attention.
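To make ‘agentic’ concrete, the underlying pattern is a loop: the model chooses an action, the system executes it, and the result feeds back into the next choice until the goal is reached or a step limit is hit. Here is a minimal sketch in Python; every function and tool name in it is hypothetical, for illustration only, not any vendor’s actual API:

```python
# Minimal sketch of an agentic loop: plan -> act -> observe, repeated.
# All function and tool names here are hypothetical, for illustration only.

def plan_next_action(goal, history):
    """Stand-in for an LLM call that chooses the next step."""
    steps = ["search_hotels", "compare_prices", "book_room"]
    done = [h["action"] for h in history]
    for step in steps:
        if step not in done:
            return step
    return "finish"

def execute(action):
    """Stand-in for actually calling a tool (browser, API, calendar...)."""
    return f"result of {action}"

def run_agent(goal, max_steps=10):
    history = []
    for _ in range(max_steps):           # step limit: agents must be bounded
        action = plan_next_action(goal, history)
        if action == "finish":
            return history
        observation = execute(action)    # feed result back into the next plan
        history.append({"action": action, "observation": observation})
    return history

trace = run_agent("book accommodation for the family trip")
print([h["action"] for h in trace])
# Note how failures compound across the loop: with ~70% per-step success,
# a 4-step task succeeds only ~0.7**4, i.e. about 24% of the time.
```

The compounding-error note is why multi-step tasks are so much harder for these systems than single-shot question answering.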

ChatGPT dominates the industry, but there are many large-scale AI engines in the market, and they’re becoming more narrow and specialised. They are in constant development, with the industry seeing updated versions monthly, and even weekly, across the board. To help outline where the agentic AI models are today, a Carnegie Mellon research paper from May 2025 ran the top AI engines through 175 real-life workplace agentic tests.

What were some of the results?:

[Table: Carnegie Mellon agentic AI benchmark results]

They failed miserably, and we shouldn’t find these results encouraging: Google’s model did best, with only a 30% success rate. Agentic AI doesn’t seem to handle the multiple steps involved in completing even basic tasks.

AI tools are optimised for generation, not comprehension. Maintenance of complex systems requires a high level of comprehension.

Some get 3-4 steps in, then fail. Billions upon billions spent. Massive resources and energy on increased compute over many years. But they’re not ready. White-collar jobs are safe for now, we might think. (Although if your job can be off-shored then, going by the US tech industry’s experience, it will be. But that’s another story…)

The research house Gartner predicts 40% of agentic AI projects will be cancelled by 2027 due to rising costs and unclear business value. With results like those above from the biggest and most heavily funded players, we should query where the forecast remaining 60% will be after 2027.

Gartner adds that of the thousands of AI agents out there, only ~130 are actually real. Companies are frantically rebranding AI assistants, chatbots and RPA tools as ‘agentic AI’ to ride the current hype wave. It is reminiscent of the dot-com boom, when companies added ‘.com’ to their names with barely a change to the business.

Performance issues

The AI engines do some amazing things and provide quick answers to questions in a human-like voice. But they’re also known to ‘hallucinate’: being completely wrong, or making things up that aren’t true. The AI models have achieved a great deal in natural language processing, to the point where English has almost become a computer code language, but it’s not complete. Where there are gaps, the AI will invent things to fill them, or fail completely. The AI engines also don’t perform well in areas where most of us would assume they would.

Lawyers have come a cropper citing made-up case law in their court submissions and have faced disbarment, including in the Federal Circuit and Family Court of Australia; academics have been cited in fabricated papers for things they did not say; journalists have reviewed books by famous authors that were never written.

In an embarrassing recent example, the Atari 2600, the iconic game console released in 1977 (nearly 50 years ago), beat OpenAI’s ChatGPT in a simple game of chess. The AI had not learned how to play chess; it was said that ChatGPT ‘made enough blunders to get laughed out of a 3rd grade chess club’. Chess, which follows strict rules, is considered a ‘high-complexity task’ where both standard AI models and their Large Reasoning Model (LRM) variants experience complete collapse.

But this is the biggest revolution since the internet, electricity… or fire, and it’ll take your job! The big AI models have been failing at the simplest rule-following computations, and this has been an industry inside joke. Until recently, they infamously couldn’t tell you how many letter ‘r’s there are in ‘strawberry’. (Don’t worry about trying it now; they’ve since fixed it!)

So, what’s the problem with AI?

Let’s take the current case of Apple to highlight this. Apple is facing lawsuits for exaggerated AI claims, while at the same time proclaiming that AI has a long way to go. Apple’s research published in June states: ‘…today’s AI isn’t reasoning, but using advanced pattern matching that collapses when problems get too complex. Unlike true intelligence, more computing power or clearer instructions don’t help; the models simply hit a hard wall. This indicates we’re seeing the limits of memorization, not a path to AGI, suggesting current AI faces fundamental barriers that more data and compute alone can’t overcome.’

This is important. Apple is saying that the current pathway of more data, bigger LLMs and more powerful compute is not leading to AI or AGI outcomes.

Learned researchers have been reaching the same conclusion for years: LLMs are not a pathway to AI. The new models we’re seeing today put a fresh twist on this with their natural language responses and familiar layouts. How are we to know when the AI is flat-out wrong and hallucinating? It introduces a new kind of cognitive hazard for us to deal with. The problem isn’t just that AI is wrong, but that it is convincingly wrong. See @Yann LeCun for more on this, as well as on the difficulty with video training below.

Much has been made of LLMs training on the world’s data with massive compute (e.g. using Nvidia’s GPUs), but video is different. We cannot train AI on video the way we train it on text. Researchers have been trying for 20 years to have AI predict what happens next in a video. Predictive text has been around for a while, and LLMs can be trained to know what the likely next word or words will be; this is what LLMs do best, as probabilistic text engines. However, representing a probability distribution over all possible frames of a video, or all possible missing parts of an image, is not something we can do. It seems it’s not enough to break it down frame by frame and apply ‘neural networks’ with ‘transformers’ or ‘3D convolutional models’; we only end up with a sophisticated form of ‘pattern matching’ that lacks context and understanding. Researchers say ‘it’s mathematically intractable to represent distributions in high dimensional continuous spaces’, i.e. the next image cannot be completed.
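A back-of-envelope count shows the scale of the problem the researchers are pointing at: even one small image lives in an astronomically large space. Assuming a modest 256x256 frame with 8-bit RGB colour:

```python
import math

# How many distinct 256x256 RGB frames exist? (8 bits per colour channel)
width, height, channels = 256, 256, 3
values_per_channel = 256

pixels = width * height * channels           # numbers needed per frame
# The count of distinct frames is 256 ** pixels; far too big to print in
# full, so compute how many decimal digits that number has instead.
digits = int(pixels * math.log10(values_per_channel)) + 1

print(pixels)   # 196608
print(digits)   # 473480 digits; the universe's atom count has only ~81
```

A distribution assigning a probability to each of those frames is what ‘mathematically intractable’ means here, and video is a sequence of such frames.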

Other researchers have labelled LLMs ‘stochastic parrots’ that generate text based on statistical pattern probabilities without understanding the underlying meaning. Does an LLM know what an object is? Or a pixel? No, it does not.
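The ‘stochastic parrot’ idea can be demonstrated in miniature with a toy bigram model: it learns only which word tends to follow which, then samples, producing plausible word order with zero grasp of meaning. Real LLMs are vastly more sophisticated, but the critics’ point is that the output is still a draw from a learned distribution:

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": record which word follows which, then sample.
# Real LLMs use transformers over tokens, but critics argue the output is
# still a draw from a learned probability distribution, not understanding.
corpus = "the cat sat on the mat and the cat saw the dog".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)     # duplicate entries encode frequency

def parrot(start, length=6, seed=0):
    random.seed(seed)                # fixed seed for repeatability
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break                    # dead end: no observed continuation
        word = random.choice(follows[word])  # sample a likely next word
        out.append(word)
    return " ".join(out)

print(parrot("the"))
# Plausible-looking word order, but the model has no idea what a cat is.
```

Every output is a chain of word pairs seen in the training text, which is exactly why it reads fluently while meaning nothing to the machine.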

It would be wonderful if we could, but we don’t have self-driving cars that can teach themselves to drive in 20-odd hours the way a 17-year-old can.

The key to the AI problem is this: we actually don’t know how humans think! Nor do we have agreed definitions for intelligence, or even for AGI itself. Artificial General Intelligence implies that these machines will need consciousness to function like a human or any sentient being. Consciousness is not simply a ‘productive’, measurable quality put to work; it is also about thinking, reflecting, remembering, pondering, emotional being, and daydreaming, to name a few. It is difficult to describe and define and, like intelligence, there is no full consensus on what it is. If we cannot describe what consciousness is or how it works, then we cannot claim the 0s and 1s of computer code can magically obtain it.

Language (text) is not the whole world. AI trained on LLMs, i.e. on language alone, will not get us to true AI. Furthermore, if LLMs train on the whole of the internet, half of which is said to be generated by bots, then we have a giant chatbot trained on other bots.

The Philosophers

Fundamentally, we are more than just ‘sense data’ gleaned from our eyes, ears and noses while interacting with the physical world. Since Socrates and Plato first popularised these ideas 2,500 years ago, philosophers from Immanuel Kant to, more recently, Ludwig Wittgenstein have built on them by arguing that humans are born with ingrained, innate ‘a priori’ knowledge, part of the human condition, having evolved to live here over thousands and millions of years.

LLMs are very much like the Chinese Room, a thought experiment John Searle introduced in his 1980 paper ‘Minds, Brains, and Programs’, designed to show that a computer simulating human thought does not thereby understand or exhibit intelligence. For Searle and many others, computational AI or AGI remains a fictional concept.

Without turning this into a modern-day science class, the scientific consensus, and the cognitive scientists working in the field, still agree with our 18th- and 20th-century philosophers. From what I understand, the big AI labs are now trying to incorporate ingrained knowledge into their chatbots’ starting blocks. It’s going to be an incredible (impossible?) task to build human intelligence and consciousness into their machines. There’s a long way to go if they’re going back to the drawing board.

AI’s two other big problems for investors

[1] Profitability: GPU computing lacks monopoly pricing power and quickly becomes commodity-priced computing. This means no one really owns the IP right now, and no one is making money from it (apart from Nvidia, which provides the chips). They’re all stealing from each other, as well as scraping every corner of the internet and beyond, allegedly including places they shouldn’t, with copyright and legal battles looming. The AI industry concedes the copyright point but claims it’s all covered by ‘fair use’. If not sorted, this could potentially up-end the entire AI industry! Some commentators don’t think AI will ever be profitable. They claim that when those GPU stockpiles get large enough and demand inevitably drops, massive capital destruction will sweep across the industry: the bubble bursts.

[2] Expense: The technology is not scaling. AI is super expensive and not getting cheaper. As a new technology, it started expensive and got worse, which is not the way it usually goes for mass adoption of new technology. Information costs fell dramatically with the internet and have continued to fall as digitisation becomes more pervasive. Conversely, in the last three years, AI has become more expensive as the models keep getting larger.

For example, Google’s Gemini 2.5 is vastly more expensive to run than any of its predecessors. Though it might be the best at the moment according to Carnegie Mellon’s research (above, and this could all change by next week!), it’s very expensive, and Google cannot keep throwing money at it. Further, the "AI Overviews" feature that Google added to search results in 2024 is estimated to consume 30 times more energy per query than simply returning links. The feature was enabled by default, which certainly helps get Google’s ‘AI users’ numbers up.

Other commentators say that LLMs are at their limit and now in the diminishing-returns phase. But the AI industry is still on the path of ‘bigger LLMs with more compute is better’. The US AI industry seemingly doubled down when the vastly cheaper open-sourced alternative from China, DeepSeek, landed; its release tanked US tech stocks for a day or two in early 2025, but they recovered after Sam Altman of OpenAI got up and spruiked the goodness of his and the US industry’s models. And although DeepSeek performed very well on some AI benchmark scores, it was found to be limited on others, likely deriving its success from ‘standing on the shoulders of giants’. Still, it highlighted that the Big Tech models were vulnerable to algorithmic innovation. So the AI industry shrugged it off and kept ploughing ahead. The claims made a few years ago, and still being made now, that breakthroughs will bring costs down have not materialised. Hopium is not a strategy; it’s a gamble.

Environmental impacts

AI in its current form requires huge compute and therefore massive energy. In Australia in 2024, data centres used approximately 5 per cent (around 1,050 MW) of the electricity on the national power grid. We should note, however, that despite this substantial energy use, data centres can contribute to energy savings by driving digitalisation across various industries, processes and infrastructure. It may be too early to say whether expanding the use of AI data centres will produce savings elsewhere.

At the moment, energy demands continue to rise. AI data centres are estimated to require 50% more power than traditional data centres, and apparently 50% of that massive energy draw goes to cooling alone. Fresh water is often needed to cool the huge arrays of servers behind the sheer processing power that even the simplest text prompt requires. Controlling for heat as a by-product of the energy requirement is a major factor, and a reason these data centres cannot be two storeys: vast, flat tracts of land are needed. New data centres incorporate solar roofs and water and energy recycling, but their power costs and use are among the largest of all industries; globally, they sit behind only steel, cement and aviation in energy use. It’s a new frontier, with the largest facilities taking 3-6 years to build.

Big Tech players like Google, Microsoft and Amazon have started backing away from their ‘net zero’ commitments due to their AI pursuits. It doesn’t sound efficient or ‘good for the planet’ right now, unless a few footy fields’ worth of solar panels were put up next door to the data centres to keep the lights on, blow the fans and pump the water 24/7. I actually have no idea how much power is required to run these things, but it sounds mega huge!

At the local level, councils have expressed concern at the amount of water required to run these new mega AI data centres. I’m also told that these data centres require large diesel generator back-ups, and that the fuel must be removed, wasted and replaced at least every two years (perhaps annually). Thousands upon thousands of litres at a time.

Edit: OK, I checked the energy use and yes, the numbers are big. Say an AI-heavy data centre requires 300 MW of electricity; this equates to over 500,000 litres of diesel held on-site to ride out 2-3 days of power interruption. This is costly stuff. And how about this: it is estimated that a few ChatGPT prompts use ~500ml of water. The numbers only get larger from here. Sam Altman’s OpenAI data centre in Texas requires 1 GW (1,000 MW) of power, which is more than some US cities. Big Tech is not slowing down, and its AI quest is making it the Big Polluter of our time.

Meta recently announced huge $10B AI data centre builds of up to 5 GW. In the US, 1 GW of continuous power is the equivalent of around 1 million homes.
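That homes figure is easy to sanity-check. Assuming a typical US household uses roughly 10,500 kWh per year (a commonly quoted ballpark), the average continuous draw per home is a bit over 1 kW:

```python
# Back-of-envelope: how many average homes does 1 GW of continuous power
# supply? Assumes ~10,500 kWh/year per US household (a rough ballpark).
kwh_per_home_per_year = 10_500
hours_per_year = 365 * 24                    # 8,760 hours

avg_kw_per_home = kwh_per_home_per_year / hours_per_year
homes_per_gw = 1_000_000 / avg_kw_per_home   # 1 GW = 1,000,000 kW

print(round(avg_kw_per_home, 2))   # 1.2 kW average draw per home
print(round(homes_per_gw))         # ~834000, so "around 1m homes" holds up
```

On those same numbers, a 5 GW build would draw roughly as much as 4 million homes.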

So this is mammoth, unprecedented scale. Altman has said now is the time to solve fusion, and even floated the thought experiment of Dyson Spheres(!). They’re virtually admitting the current trajectory is impossible. The data centres tend to be located in arid, flat lands where water can be scarce but state and county governments provide incentives. There are homes in the US right now that no longer have sufficient water since the data centres moved in, and household power bills in these areas are going way up. This technology is not a bastion of sustainability, and nor is it making friends locally.

Aussie software developer, Atlassian co-founder and Ironman competitor Scott Farquhar sees an opportunity. In a speech at the National Press Club this week, he said Australia could be poised to become a regional hub for data centres powered by renewable energy.

"We should export megawatts as megabytes for potentially megabucks. This could be a $10 billion-plus opportunity."

He highlighted that Australia could service South-East Asia, as "there are more users of ChatGPT in Indonesia and Vietnam combined than there are in the United States." He says we have a competitive advantage in building and operating AI data centres: abundant energy, clean energy, and a stable rule of law (I’m sure he’s thinking about the water needed too). A general need for data centres is known; whether this automatically translates to AI demand is what’s being questioned here, and it will need to play out.

The Reality

What’s interesting is that AI isn’t being used that much at the leading AI firms. Microsoft engineers are having a hard time using it; Microsoft has said it will essentially force its employees to use its AI products because adoption is too low, and managers were instructed to incorporate AI use into performance reviews. Why aren’t they using it already? I had a similar experience last year while working at a global management consultancy in its Melbourne office. One of the big tech firms was offering tens of thousands of dollars in grants to perform discovery on viable use cases for its AI product. In 2024, all the way over here in Melbourne, a Silicon Valley super giant, a global leader in AI, was looking to us to come up with new ideas for its products. I don’t think any use cases were put forward for the grants.

Anthropic, one of the world’s strongest proponents of strong AI, and of the claim that its machines will most definitely change the world and take your job, has been hiring hundreds of new software developers lately. It runs the Claude AI model, one of the best known for ‘vibe coding’, so the irony (and hypocrisy) is not lost on me. According to Anthropic, AI will be capable of automating nearly every white-collar job in the next 2-3 years. This is hype.

Companies that went all in on AI are bringing back humans for critical processes.

The cost of AI mistakes consistently exceeds the cost of human review, forcing businesses to add (back) verification steps they hoped to eliminate.

Klarna, the Swedish BNPL (Buy Now, Pay Later) fintech, recently had to re-hire its customer service department because quality and customer satisfaction had dramatically declined. In 2024, the department had been replaced by AI chatbots, causing a customer backlash. The CEO and founder had bragged that he hadn’t hired a human in a year because his AI chatbots were doing the work of 700 employees. In May he walked back this strategy and repositioned his pre-IPO company as becoming the ‘best at offering a human to speak to’. Anecdotally, I’m hearing that tech users in general are avoiding vendor support that has gone all-in on AI chatbots.

Companies are responding to what I call the ‘AI gap’. They realise it’s too early to shift to these new AI technologies en masse, and that doing so can put their business at significant risk. Successful AI implementations are shifting from ‘all-in’ automation to practical augmentation and optimisation that enhances, rather than replaces, human judgement. This is kind of a new thing, and the consultants are having a field day overseas.

Others question why companies are moving so fast with an immature technology. They don’t see how a company benefits from effectively beta-testing it. Why not wait for it to be ready to deploy? They don’t see how companies will be left behind, or at any disadvantage, if they simply drop the AI in and run with it once it is ready. This mad rush onto the latest tech is highly distracting and all too frequent.

Crude Summary: It’s over-hyped, it doesn’t work as promised and is too expensive anyway.
