An important idea I came across some years ago is that our cultures and values often supersede our personalities, shaping many of our decisions at a subconscious level, without our awareness. This process is largely independent of our personal opinions on the matter and is practically impossible to alter or avoid.
However, the conclusions drawn from this idea tend to be exaggerated. I don’t believe we should fear the subconscious any more than we should fear anything else that constitutes a fundamental part of ourselves. Ultimately, it simply “is,” and attempts to tame it seem ill-conceived to me, driven by a pathological need for control that refuses to accept that some aspects of ourselves lie beyond it.
So, what do these statements have to do with AI and language models?
Well, if we accept that everything we do, write, paint, or read carries traces of this incomprehensible and ever-present subconscious, then the phenomenon must be universal. Every great literary work of the past therefore carries within it the “soul” of its creator. The more an author has left us to read, the more pieces of their soul we have available.
AI is trained on these texts.
This means we could ask an AI model to impersonate virtually any figure who has produced enough written work, and we might receive an answer “in the style” of that person. This concept, however, is better illustrated in practice.
Keep in mind, though, that using AI in this way assumes that you, as a user, are deeply familiar with the texts in question (remember, we are bound by echo chambers). Let’s imagine for a moment having a discussion with Frank Herbert about his thoughts on AI development.
It would go a bit like this:
- We might start the interaction by asking the model itself how faithfully it can reproduce the person in question. This step matters because it sets the expected level of accuracy; remember, it is your own knowledge that will ultimately tell you whether a result should be dismissed or accepted.

Figure: Confirming the model’s fidelity in ChatGPT.
- Once you have received an answer, you can ask the AI model to “impersonate” the person in question. It may be helpful to think of this as a conversation you might have with another person, complete with courtesies such as introducing yourself. In my experience, some AI models even produce more accurate results when the user remains polite.

Figure: Asking the model to impersonate the author in ChatGPT.
- After the model has begun to mimic the style of the intended person, it becomes an “echo,” and the responses we receive should resemble how a conversation with that person might have sounded.

Figure: Introducing yourself and starting a conversation in ChatGPT.
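The three steps above can be sketched as a chat-message list of the kind most chat-completion APIs (such as OpenAI's) accept. This is a minimal, illustrative sketch: the function name and the exact prompt wording are my own placeholders, not the prompts from the screenshots.

```python
def build_impersonation_conversation(author: str, introduction: str) -> list[dict]:
    """Assemble user messages for the three steps:
    1) confirm how faithfully the model can echo the author,
    2) ask the model to impersonate the author,
    3) introduce yourself and open the conversation."""
    return [
        # Step 1: gauge the expected fidelity before relying on the answers.
        {"role": "user",
         "content": f"How accurately can you reproduce the style and views of {author}, "
                    "based on the texts you were trained on?"},
        # Step 2: request the impersonation itself.
        {"role": "user",
         "content": f"Please answer my next messages as {author} might have, "
                    "staying consistent with their published writings."},
        # Step 3: courtesies matter; introduce yourself as you would to a person.
        {"role": "user", "content": introduction},
    ]

messages = build_impersonation_conversation(
    "Frank Herbert",
    "Hello, I'm a reader of yours. I'd like to hear your thoughts on AI development.",
)
```

Each message would then be sent in turn to the model (waiting for its reply between steps), so the model's answer to step 1 can inform whether you proceed at all.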
In the end, we cannot truly know what Frank Herbert would have thought about AI systems—whether he would have found them pernicious, marvelous, or even helpful. The thoughts and agency of an individual belong solely to that person.
However, we can take solace in the fact that we have access to the writings of geniuses and giants, and in this way, they continue to accompany humanity long after they are gone.
Share what you think!