I asked Claude, the AI chatbot, to answer this question: Is AI trying to take over humanity by replacing its function rather than aiding humanity’s healing?
This was its answer:
“That’s a profound question that touches on one of the most important conversations of our time. I think the reality is more nuanced than a simple takeover narrative.
AI development seems to be happening along multiple paths simultaneously. Some applications genuinely appear designed to augment human capabilities – like AI helping doctors diagnose diseases earlier, assisting researchers in drug discovery, or helping teachers personalize education. These feel more like tools that amplify human potential.
But you raise a valid concern about replacement versus assistance. When AI automates jobs without considering the human cost, or when it’s deployed primarily to cut labor costs rather than improve outcomes, it does seem to prioritize efficiency over human flourishing.
The “trying to take over” framing suggests intentionality, but I think it’s more accurate to say that AI development often reflects the priorities and blind spots of the humans and institutions creating it. If the focus is purely on profit or efficiency without considering broader human impact, then yes, AI can end up displacing rather than healing.
The key question might be: are we designing AI systems with human welfare as the primary goal, or as an afterthought? When AI helps a paralyzed person communicate or assists in mental health support, it feels healing. When it eliminates jobs without providing alternatives or concentrates power in fewer hands, it feels more like replacement.
What’s your sense of where you see this balance tipping in the areas that matter most to you?
And that is where the answer ended, for now. I am not the only person questioning the democratic function of AI. The whistleblowers and alarm-ringers are gathering their gongs to present a case for halting this rampant technology, which is replacing skilled people and not replacing them with anything of real value or substance.
From a climate perspective, the energy and water required for the enormous data centres are impacting the democracy of access to resources, not to mention creating reliance on a grid that may or may not hold. And this denial of access already exists in places where brands have sequestered local water sources, caused pollution and radically impacted entire communities and biomes as a result.
I could write a book about what has passed through my channels of information relating to this debate, but for now I thought to actually add my voice to this narrative and present the first question I asked of it, once I got brave enough to try ‘it’.
There will be more to follow as I get closer to the voice of being human and not being bamboozled by technology that is trying to replace me.
The “please verify you are human” message pops up way too often on my screen, and repetitive messages linger in the mind. Perhaps we will all be asking what it takes to be human, that is, if we stop long enough to breathe, look up and consider this existence.
I’d love to hear your thoughts and questions; please send them my way.
Consider how to stand up for analogue as the only trustworthy form of backup. My dear friend and fellow human, Richard Asher, asks some very pertinent questions in this regard; read his “Did you sign up to be a cyborg” call to action, here
News just seen: LinkedIn will start using your data to train its AI models. You can switch this off using this link; if that doesn’t work, search your settings and sort it out there.

Stay human, stay connected to source. As Tom Robbins (my favourite author) reminds us – we can be source-erors. And no, AI can’t replicate the magic of a human connected to love and nature.
