Looking back, 2025 was a profoundly productive and meaningful year. I learned how to swim; I returned to pilates and ballet; I nearly completed the monumental journey of Marcel Proust’s seven-volume In Search of Lost Time. I learned to read Hebrew and dedicated my PTO to intensive Art Humanities classes.
But more than any single achievement, 2025 was a year of integration.
I sensed a powerful shift last year — a feeling that the dots were beginning to connect. My study of artists like Kandinsky, Cézanne, de Chirico, Magritte, and Matisse started to merge with Proust’s observations on time and memory. Art, music, science and philosophy were no longer separate categories; they were becoming a cohesive whole. Every new piece of information refined my “old” knowledge, creating a new, more dimensional and integrated meaning.
Yet this internal expansion brought a certain isolation. Lacking the linguistic precision to describe the shifts happening in my mind, I became quieter and more selective about whom I shared my thoughts with. I almost gave up expecting others to understand me. At times, I worried that learning more would only exacerbate a state I call “constipation of the soul” — having so much inside but being unable to express it.
Initially, AI felt like a relief. It helped me articulate these obscured thoughts, turning scattered seeds of ideas into full-grown gardens of prose. I didn’t have to wait for them to take root and break through the soil; I didn’t have to endure the slow, restless time it takes for a thought to mature. The AI took all the trouble of connecting the ideas, shaping them into something coherent — most of the time, even better than what I had first envisioned. I saw that it was good and felt ready to move on to the next question.
But was I?
For a time, I was thrilled by the clarity and speed with which I could move through so many questions in a single day. But soon, I couldn’t ignore a growing sense of emptiness. There were undeniable “wins,” but I couldn’t claim them fully. It felt unearned. It felt hollow.
This realization gave me a new lens through which to view my pastor’s New Year’s Eve sermon. He wasn’t addressing the “problem of evil” directly, but he noted that in a “perfect setting,” humankind naturally becomes arrogant, and that God allows suffering as a necessary means to draw us closer to Him.
I struggled with this. If we are creations and God is the Creator, why must we bear responsibility for a nature that seems bound to fail? If we are designed to become arrogant in a perfect world — or, more broadly, prone to sin — doesn’t that point to a defect in the design? And if so, doesn’t that responsibility ultimately rest with the one who made us? The more powerful and knowing the Creator, the more fault I found in Him.
But in light of my experiences with AI, I am beginning to see the point of “suffering” or the sweat of labor. Since the Garden of Eden, perhaps the governing rule has been simple: we reap what we sow. Not as punishment, but as process. What I’m beginning to see is that growth cannot be rushed; it demands time and the slow work of wrestling.
AI offers a “perfect setting” where we don’t have to sow, yet we expect to reap. It tempts us to bypass growth altogether. And if we remove the “sowing” — the question, the wrestle, the time — I fear we risk losing the very thing that makes us more than sophisticated machines: the capacity to err, to question, to learn, and to integrate. It is through toil, after all, that knowledge becomes internal, and the process itself begins to matter.
Don’t get me wrong; I am not necessarily against AI. I recognize its utility and its inevitability. But it is moving faster than our ability to understand it. I find myself wishing for the world to slow down — if only to give us the room to reflect on what it means to be human, to suffer, and to be redeemed.
Can we pause, take a breath, and ask ourselves what we are forfeiting in our pursuit of speed and convenience? What are the consequences of the choices we are making? What is at stake? What should matter going forward in the age of AI? I fear we won’t linger long enough to ask these questions — that we’ll be left to survive on our own.
But will we?
Just questions. No answers.
And maybe that’s a good place to begin.