When Death Gets an App Update
AI, assisted suicide, and the quiet erosion of the Imago Dei
It takes a special kind of cultural numbness to hear this and not stop breathing for a moment.
Dr. Philip Nitschke, the Australian inventor of the “suicide pod” designed for use in Switzerland, is back. This time with a version built for couples. Two people. Side by side. Pressing death-inducing buttons together. A synchronized exit.
Romanticized as “connection.” Marketed as “autonomy.” Wrapped in the language of dignity.
And now, the newest upgrade: no psychiatrist. No human evaluation. No face across the table asking hard questions or noticing trembling hands.
Instead, an AI avatar.
You take a test online. A machine decides whether you are mentally fit to die. If you “pass,” it gives you permission.
Let that sentence sit for a moment.
A machine. Trained by humans. With their biases, philosophies, blind spots, politics, and quiet assumptions about suffering… now authorized to assess whether a human life is worth continuing.
This is not progress. This is abdication dressed up as innovation.
We are not just building technology anymore. We are building moral gatekeepers.
And worse, we are pretending they are neutral.
AI does not descend from heaven, stainless and objective. It is catechized. Discipled. Formed.
Every dataset is a doctrine. Every algorithm is a confession of belief about what matters, what counts, what is burdensome, what is expendable.
So, when an avatar tells someone, “You have the capacity to die,” what it is really saying is: “The values of the people who trained me find your continued existence optional.”
That is not medicine. That is ideology with a user interface.
Mental health is sacred ground. It is where trauma whispers and shame hides. Where despair lies about what tomorrow could hold. Where people say things they don’t mean because pain is loud and hope is quiet.
And we are proposing to hand that holy terrain to software.
To reduce anguish to checkboxes.
To translate despair into data.
To let something that cannot love decide who is rational enough to disappear.
Christians believe something dangerously old-fashioned: That human beings are not problems to solve. They are souls to shepherd. They bear the image of God, not a risk score.
The Imago Dei does not fluctuate with productivity. It does not disappear with disability. It does not evaporate when suffering becomes inconvenient.
Yet assisted suicide regimes already show us where this logic goes.
When states will fund death but not palliative care.
When “choice” begins to look suspiciously like pressure.
When the sick start apologizing for being expensive.
Now add AI. Now add automation. Now add scale.
This is how moral weight gets laundered. No doctor. No counselor. No human eyes to meet. Just “the system.” The machine decided. Which really means: responsibility disappears.
And whenever responsibility disappears, the vulnerable pay first.
I work with people who sit in rooms thick with fear. I have watched hands shake as someone tries to hold herself together with her own fingers. I have heard the sentence, “I don’t want to die. I just don’t know how to live like this.”
Those are not the same thing.
A machine cannot tell the difference. It cannot hear what is being said beneath the words. It cannot recognize the holy contradiction of a heart that wants relief, not erasure.
We are being told this is compassionate. But compassion does not outsource presence. It does not mechanize mercy. It does not replace human moral courage with software trained on human confusion. And it certainly does not build prettier boxes to die in while calling it dignity.
If this future arrives, it will not come with sirens. It will come with a sleek design. With “safety protocols.” With soft avatars and reassuring fonts. With language about autonomy, efficiency, and burden reduction.
But beneath it will be something ancient and ugly: The quiet agreement that some lives are negotiable.
Christians should recognize this moment.
Not as a tech debate. But as a theological one.
Who gets to decide what a life is worth?
Who is allowed to reason about death?
And what does it say about us if we are willing to hand that authority to machines because human suffering has become inconvenient to sit with?
We were never meant to solve pain by eliminating the person in pain. We were meant to bear one another’s burdens.
That work is slow. It is costly. It requires presence. And it cannot be automated.
If we give moral reasoning about life and death to AI, we may gain efficiency. But we will lose reverence.
And once reverence is gone, humanity is always next.