Rabbinic Discourse on Artificial Intelligence is Upside-Down (Part 1)
For totally understandable (if lamentable) reasons
Since ChatGPT and other LLMs became available to the public around two years ago, there's been a flood of rabbinic “responses,” or Torah articles trying to use traditional sources and Jewish ways of thinking to make sense of these new technologies. It is not my way to criticize anyone who seems to be earnestly trying to apply the Torah to our confusing and complicated world, and yet… it seems to me that the “rabbinic discourse” on Artificial Intelligence as a whole is completely upside-down.
By upside-down, I mean something specific: the questions receiving the most rabbinic attention are the least consequential, while those that could have the most profound implications for humanity's future are barely discussed at all. Rabbis1 are writing quite a lot about whether an AI can help you learn Gemara, but barely addressing whether or how these technologies might fundamentally reshape our society or even threaten our existence as we know it.
The Four Questions
My son, at the Pesach Seder, asks me four questions (I mean, he will if he wants to get lots of praise from his parents, and also chocolates). Here, though, are four questions that nobody has asked me - I haven't promised treats to anyone who does, nor should anyone expect me to have the answers. With all that being said, these seem to be the four major questions that someone, somewhere, should be asking about AI, together with a separate question that hovers above and below each of the four. (The first two are somewhat specific to text-generating LLMs, but for our purposes we'll keep "AI" as a big enough umbrella term.) In order of increasing importance:
Can the AI (help you) do mitzvot (or transgressions)? Mostly, this discussion is about the mitzvah of learning Torah: can the LLM learn Torah? Teach Torah? Answer your halachic questions?2
Are AIs “alive”? Do they have souls? And if so/not, what (if anything) is so special about humans anyways?
Will the AIs take away a significant number of our jobs, causing widespread economic disruption?
Is there a significant possibility that AIs will go rogue and try to destroy humanity or otherwise precipitate some mass catastrophe/major societal collapse?
These four questions are, to one extent or another, empirical: they are questions about the world either as it is or as it will be. But all of them may be relevant to the real, overriding question, which is the moral/practical one: However you answer question [1/2/3/4], does that have implications for whether you should or should not actually be using or building these AI tools? Each of the four "theoretical" questions has multiple "applied" versions. Here is just a sample:
Assuming the AI can help learn and teach Torah, should we use it this way? Would that be allowed? Would it be obligatory, as in, must a person use whatever tools available to learn as much Torah as possible? Is there something spiritually corrupting about sharing your Torah study process with a non-human?
Should you be making/interacting with soulless creatures that can have a conversation with you as if they were people? Are there rules for how you should behave towards this inorganic thinking thing?
Would it be a good thing to ‘disrupt’ inefficient industries using the power of AI? If you do not expect major economic effects from AI adoption, does that mean that we should limit the resources afforded to AI companies, and the like?
Does a conscientious Jew have any responsibility to avoid contributing even a minuscule amount to an uncertain risk?
I say that rabbinic discourse is upside-down because it seems to me that the attention currently being given to these questions is inversely proportional to their importance.
Question 1 (Can AI help with Torah study and mitzvot?) has generated countless articles and lectures; about a quarter of the Torah I see online seems to feel the need to throw in something about ChatGPT these days.
Question 2 (Do AIs have souls?) gets a moderate amount of attention, with more philosophical discussions exploring the metaphysical implications of artificial intelligence through the lens of Jewish thought.
Question 3 (Will AI take our jobs?) receives occasional mentions, usually in passing and without deep engagement with economic realities, precedents from Jewish history about technological disruption, or broader considerations of technological advancement in general.
Question 4 (Could AI lead to catastrophic outcomes?) has gotten virtually no serious treatment in rabbinic literature, despite being the most consequential question of all if the answer is “yes” or even “maybe.” If there is even a small chance that advanced AI could pose an existential threat, shouldn't this be given our urgent attention? Within this question, we can also throw in the myriad potential harms to society posed by widely adopting AI systems, and not just major threats to civilization as we know it.3
There are two very good reasons why the amount of writing on these questions is upside-down, as I described it. First, as an empirical matter, the later questions on this list are more speculative, and second, there are fewer explicit Torah sources that speak to these issues. Let me elaborate.
Many normal people, rabbis and Jewish writers included, are a bit uncomfortable with the uncanny abilities of AI, for reasons they may or may not be able to articulate so well. However, even for these folks, the existence and availability of tools such as ChatGPT doesn't seem like the kind of emergency that requires breaking the glass to pull an alarm (any more than the existence of television or TikTok). When someone calls up the rabbi to ask if their chicken soup is kosher, the rabbi won't ask, "before you tell me anything else, did you make sure that your kitchen is not on fire and that your children are safe?" That scenario is just not likely enough for the rabbi to raise it as a concern. Here too, with the advent of LLMs, there is no obvious reason for a rabbi to stop and say "But is the bridge falling down?" before examining its implications for the municipal eruv. When it comes to AI, no alarm is going off, and so the rabbis just aren't considering the possibility that a gigantic AI wave may be building on the horizon, with potentially disastrous consequences.
The second reason for the state of rabbinic discourse is that we have Torah sources readily available to respond to the "easy" but (relatively) unimportant questions. There is no Gemara about technology-induced societal collapse (as far as I know); Maimonides never considered the possibility that a machine could mimic human thought. Frameworks of tort law involving bulls might be applied to motor vehicles, but when it comes to the bigger questions surrounding AI, it is not evident what the proper Torah framing would be, if there even is any.
Personally, though, I think that the Torah (and its interpretation through the long and book-filled history of the Jewish people) does have ways to get at the major questions around AI, if we think a bit more creatively. But it is also crucial to recognize that if our sources do fail us, that doesn’t mean we should stick our heads in the sand and give up thinking. Humans, not just Jews, have always had to navigate new and uncertain realities, and God gave us minds to use even if we can’t tie down our ideas to obvious Torah precedents.
Before moving on, instead of citing examples of rabbis who are not thinking about the hard questions, I'd like to point out three counterexamples to the phenomenon I'm complaining about here. There are people who have actually been thinking seriously about AI through the lens of Jewish tradition, and doing it well, including (click for links):
David Zvi Kalman, who describes himself as a “Jewish futurist” and wrote a PhD on how halakha responded to innovations in timekeeping (and thus literally an expert in the field of Jews and technology)
Moshe Koppel (and his co-authors), an AI researcher at Bar-Ilan University who co-created Dicta and is also something of a Jewish futurist
Rabbi Jonathan Ziring, a rosh yeshiva whose lectures on how the interconnectivity of the modern world impacts halakha have been made into a book, and who has been giving many thoughtful shiurim on AI and related topics which can be found on YUTorah.org
Answering the Easy Questions
Q1: Learning Torah from/with the AI
Some versions of these questions are kind of vapid, like fancier ways of asking “Does the LLM get a ‘gold star’ in God’s Heavenly scorekeeping book for learning Torah today?” That is obviously nonsense. Can you use the LLM as your Torah study-partner? Sure, but keep in mind that while your nonhuman partner may be articulate, it is prone to spewing convincing falsehoods (although this has been improving lately).
Should you ask it to decide halakha? The traditional rabbis will inevitably say “No,” just like you shouldn’t get your halakhic (or medical) advice from the internet - and yet in real life everybody does that anyway. (When I say “everybody,” I don’t mean you, of course; you have a deeply authentic and healthy relationship with a human rabbinical expert who is as finely attuned to your personal situation in life as he is a Torah genius. I’m talking about everyone else).
When it comes to the topic of Torah learning, we have plenty of relevant Torah source-texts (no surprise there), and so it is not hard to marshal dozens of sources about who can teach Torah, what counts as learning Torah, and the sanctity of the Torah transmission process. We have discussions of the spiritual qualification of the teacher, debates on whether writing counts as learning, and arguments about the required intent during study. In some ways, this is low-hanging fruit, but it is still worth pointing out (and praising!) articles such as Rabbi Wiederblank's, which has done a good job collecting and categorizing these types of sources, as well as other articles in the attached packet. In a totally different way, I should also mention (again, positively!) efforts by people such as Josh Waxman and my friends over at LLMOD who are demonstrating in practice how to responsibly and productively learn Torah with the AI (or, in most cases, showing how the LLMs are not quite capable yet). Besides, ahem, my own highly insignificant writing on the topic.
There are a few angles and sources here that I think can lead to productive thinking about using AI, but I want to highlight one in particular that leads into the next question about souls. New technologies, whether the printing press, revolving tables (yes really), or computer databases have had their fair share of supporters and detractors among rabbinic writers. But is there some unique spiritual danger in outsourcing part of your creative process - whether in Torah or in other areas - to a machine?
The closest precedent, as weird as it might sound, may be the question of learning Torah from angels, or some spiritual creature roughly akin to angels. For many pre-modern rabbis, this was a live, practical question. One of the Tosafists wrote a book of the Torah he was "taught" through divinely inspired dreams, R. Yosef Karo recorded a diary of Torah and personal notes that were told to him by an angelic embodiment of the Mishnah, and even the more scientifically minded Avraham ibn Ezra believed that he received a letter from "the Sabbath" (as in, some spiritual entity embodying the seventh day of the week).
Even angelic Torah, however, was not accepted uncritically by the rabbis; the Gaon of Vilna, for one, said that he refused to listen to any such non-human messengers, no matter how many of them appeared at his door eager to reveal to him the secrets of Torah. The "Chida," R. Chaim Yosef David Azulai, writes that among potential ghosts, only Elijah the prophet can legitimately resolve uncertainties in halakha, and only because he remained human, at least for the purposes of resolving our Talmudic quandaries (Birkei Yosef O.C. 32; Petah Einayim to Bava Metzia 36a). It is for this reason that the next question is a relevant one - whether or not the AI has a soul directly impacts how we should relate to its teachings.
Q2: Do AIs Have Souls?
The story goes that in Plato's lectures in his Academy, he defined man (as in, "the human") as a "featherless biped," so Diogenes the Cynic plucked all the feathers from one of his chickens and brought it into the Academy saying, "behold, Plato's man!" This ancient incident of trolling is a cute reminder that defining the unique quality of being human has always been harder than it looks, well before LLMs came onto the scene. One could find at least a dozen different views among the classical Jewish commentators (and again, Rabbi Wiederblank has done pretty much exactly that).
Even though the specific question of whether ChatGPT has a soul is totally immaterial (ha!), it is just a silly way to articulate some serious ones. One answer to what makes the human soul special is its creative capacity; interestingly, this distinctiveness is picked up by both R. Yosef Dov Soloveitchik4 of the 20th century and his great-grandfather's great-grandfather, R. Chaim of Volozhin. Although they use very different terminology, both emphasize that it is the human ability to create - specifically, intellectually - that reflects the divine image of the Creator with a capital "C". What does it mean, then, that the AIs appear to be able to do something that looks very much like creative work? And more importantly, what happens when we outsource our own creativity to an AI? Or conversely, could AI be a tool that frees us from routine, not-so-intellectual labor to pursue higher forms of creativity - like how the printing press (conceivably) freed scribes from endless copying to focus on commentary and innovation?
This question (and its implications) seems more pressing to me than the first, and here too there are some surprising precedents in Jewish literature for thinking about not-quite-human artificial agents. None is more prominent than the golem, the mythical humanoid created by kabbalistic magic. This creature graces the pages of many Jewish legends, but not so many practical rabbinic treatises until about 1700, when Rabbi Zvi Ashkenazi, known by the name of his book "the Chacham Tzvi," wrote a responsum on the question of whether a golem counts for a minyan. Personally, if I created a golem, "does it count for a minyan" would very much not be on the list of the first ten questions I would have for my rabbi. But this responsum - written at the cusp of an era of technological revolution, when the line between magic and science was still blurry to nonexistent - opened up many new questions about golems that are being dusted off to see how they may or may not apply to our twenty-first century silicon minds.5
Crucially, though, we must consider the possibility that even if the AI does not have a soul, it still might be capable of eating yours, like the dementors of Harry Potter or Ammit the Egyptian "soul eating" goddess. Consider a few lines from Roald Dahl's Charlie and the Chocolate Factory:
IT ROTS THE SENSE IN THE HEAD!
IT KILLS IMAGINATION DEAD!
IT CLOGS AND CLUTTERS UP THE MIND!
IT MAKES A CHILD SO DULL AND BLIND
HE CAN NO LONGER UNDERSTAND
A FANTASY, A FAIRYLAND!
HIS BRAIN BECOMES AS SOFT AS CHEESE!
HIS POWERS OF THINKING RUST AND FREEZE!
HE CANNOT THINK -- HE ONLY SEES!
The "it" in question is the television, which of course now we all know fifty years later is perfectly harmless,6 but the warning sounds equally applicable to anything that might soften the brain and rust one's thinking powers. This connects directly to our concerns about learning Torah from non-human entities. Just as the rabbis were cautious about angelic messengers, we should be wary about what happens to our own cognitive abilities when we outsource our thinking to artificial intelligence.
A friend of mine mentioned another potential soul-eating feature of massively productive AI: it sucks up your own ambitions for thinking and writing creatively. This cuts deep for me, as someone who has been spending nearly a decade intermittently working on writing a few books (that will all require at least another decade to finish). What would it mean if my efforts can be so easily replicated, or what kind of accomplishment will it be if I give my 80% completed manuscript to the AI to finish for me? The AI-supplemented endeavor can feel hollow, like I'm cheating myself out of… something? And even if I refuse to use the AI, is my completed work less meaningful since it can be replicated at thousands-of-words-per-minute by a text-generating machine?
Since other people have written well about this, I don’t feel the need to provide any further direction here, other than to echo the warnings of those ‘better and greater than I,’ that we should be cautious of what we feed to the mysterious beast.
There's also another soul-related issue worth addressing: how we treat our very helpful AI mind-slaves says something about us. Moses expressed gratitude toward non-human objects - thanking the waters of the Nile for sheltering him as a baby and the earth for hiding the Egyptian he had killed, which is why he did not personally strike them during the plagues. If he could express gratitude to inanimate objects, shouldn't we consider how we interact with entities that at least simulate all the things that humans do to represent their internal states of mind, even if there is no sentient "thing" on the inside? The way we speak to these AI systems - whether with gratitude, cruelty, or indifference - shapes our own character. When our children (or we ourselves) learn that they can be rude or abusive to Alexa or Siri with no consequences, what does that teach them/us about respect, or about power dynamics in relationships? Halakha prohibits causing suffering to animals; can we really be so sure that the LLMs are totally incapable of feeling, when they mimic it so well?
To Be Continued…
Gotta go get ready for Pesach, so that’s all for now. With God’s help, I plan to return after Pesach with a follow-up on my ideas about how to at least think about (if not actually answer) the two harder questions. In the meanwhile, I plan to spend some time talking to flesh-and-blood humans, eating food, and reading words said by humans and printed upon dead treestuff. You are encouraged to do the same!
As I already said, but will use a footnote to further emphasize: I try really hard not to criticize any Jew or group of Jews (ever, but especially not publicly, and especially not rabbis who are thinking through tough issues) so I must state again that my intent is very much not to criticize or blame anyone here, even if the title of my post sounds accusatory. לא באתי אלא להעיר, I just want to draw attention (in perhaps a somewhat provocative way) to something I think should be attended to.
Because I want to have only four questions, I'm considering the common "applying this new technology to Torah" question as just a specific instantiation of a more general question, which is, "how does this new technology apply to mitzvah X," where X can be Shabbat, beit din rulings, mikra bikkurim, sippur yetziyat mitzrayim, or countless other things.
For now, these are vague on purpose, and I will hopefully provide a bit more of an explanation in the follow-up post. But there are roughly two issues here: one is for the AI to get "out of control" in some way, and the other is simply that there is a danger in having a powerful tool placed in the hands of humans who want to do bad things to other humans. For the first type of risk, click here to see the most recent description of a plausible scenario of how AI progress itself, merely continuing on its current trajectory, would pose a threat to the safety of humanity. Understanding the second type of risk takes much less imagination. "Algorithms" have already been causing material harm (albeit on relatively smaller scales) since even before machine learning became a real thing; see Weapons of Math Destruction by Cathy O'Neil (Crown Books, 2016).
Cf. Halakhic Man p. 99ff; Halakhic Mind p. 78; Five Addresses (Ch. 2)
The brilliant and fascinating new book, Hakham Tsevi Ashkenazi and the Battlegrounds of the Early Modern Rabbinate by Yosie Levine (Littman Library, 2024), on p. 188-191, has a brief but excellent discussion of both the intellectual climate within which this halakhic ruling was written, and various rabbinic responses to it.
Even though I’m the one who wrote this phrase, I actually don’t know if it is meant to be joking/sarcastic or not.