Training A New Apprentice
Experiments with new technology aimed at furthering the Kingdom
I recently had a live conversation with Gabriel Stuckey, a very talented and creative thinker and maker of all sorts of things. Gabriel is a kindred spirit whose thoughts and observations often mirror my own. As people who make a living with tools, we discussed the role that AI plays, and will continue to play, in the future, both for good and ill.
While our familiarity with various types of tools tends to diminish our fears of technological boogeymen, we recognized that there is something different about tools based on artificial intelligence compared to even the most sophisticated manufacturing systems, which have more in common with hammers and saws. The analogy I shared, which developed through previous experiments with Grok and ChatGPT, is that AI systems are more like apprentices than traditional tools. Not only are they “controlled” through communication and instruction, rather than direct manipulation, but they need to be formed and trained according to a specific purpose.
To that end, I have finally found the correct kind of “apprentice” for the projects I have been working on in Google’s NotebookLM. Unlike the other LLMs, which tend to get bogged down rather quickly with a lot of user-generated content, NotebookLM is designed to handle huge amounts of text, which can be searched, sorted, and summarized, with fewer hallucinations.
I was blown away by its ability to not only summarize in text, but also create audio and video content that aims to highlight the “gist” of a particular work, even handling the 800-page PDF of my book in short order. However, with such a massive amount of information, it’s bound to overlook far more than it can highlight, and it was interesting to see which points it pulled forth as “key,” “central,” and “main,” words that the explainer videos love to throw out there.
Like the other LLMs I have used, its summaries are better suited to more limited sections of text. I have been feeding it sections that go together, and it does much better with these. To get the overall, high-level view that I thought was most central, I had to monkey around with things quite a bit.
First, I wrote a supplemental document of around 900 words, which describes how I think the book ought to be read. Adding that along with the massive text did little to move the needle. Then, I used that shorter document to have it create a specific prompt for summarizing the larger text. That was better, but not perfect. My best attempt came from including the document and the prompt along with a specific list of chapters, eliminating several sections that are tangential to the overall themes of the book.
Those themes were something that emerged over time, as I wrote the text, and it was my decision to keep and present “rough drafts” that bogged down the initial experiment. So in essence, I gave it the stripped down version that might serve as a basis for a future revision, but without dissecting the actual chapters.
The interesting thing is how interacting with these “tools” or “apprentices” sharpens my understanding of my own work. When I first published the book, it was even longer, so long that I could only print it in a phonebook-sized format. When I got the author’s proof of that first version, I realized it was ridiculously large. This prompted me to cut about 20% of the text by removing whole sections and chapters (with a few very selective edits of others), bringing it down to the 800-page, 6”x9” paperback version that has been available on Amazon for close to a year.
In that first revision, I also reordered the chapters to give it a bit more of a coherent flow, which also allowed it to be neatly divided into two volumes that could meet the stricter criteria for hardcover publication. There were a few mistakes, however. Some chapters were moved up (8-10, specifically), and references to “as previously stated” were now incorrect. Those mistakes live on as proof of human error.
I also just recently realized, through the process of recording chapters on YouTube, that the epilogue references chapters that had been culled from that massive phonebook. Another fun Easter egg. As I continue to work on a much more focused, stripped-down volume, I have begun using various LLMs to identify and fix those kinds of problems.
Unlike the previous “all by myself” effort of my first book, it is still my intention to get some real human feedback and editing for the new work, which recontextualizes about eight chapters of Tents Before Temples and is very tentatively titled Tectonic Philosophy. I wanted to see which easy mistakes could be caught and fixed with the help of AI before foisting a raw manuscript on any friends generous enough to help out.
But first, I wanted to wrap up the loose ends of my previous project, which is essentially going to be totally free and available to the general public, for both altruistic and promotional purposes. This meant recording all the chapters to put on YouTube, and eventually Audible as well. I’m not going to go through and fix all the little reading stumbles; when I misread, I reread. But it’s still a massive amount of content, and it’s tough to get people interested in something with a potential time commitment nearing a full week’s work.
Making a series of promotional videos to summarize the sections was one of the first tasks I used NotebookLM for. It’s been a learning process, but since the book was really written as a series of “too long for a blog, but not quite a book” length essays, it does great with the sections of 3-10 chapters that make up a given subject. Getting people to watch an AI-generated video is another problem, but not nearly so insurmountable, since they are only 5-7 minutes long, and I have some creative ideas to spice things up.
Another task I set before my new “apprentice” was to create summaries for the description of each video. I asked for something between 100 and 150 words, and had it include any links referenced as well. This was easy, and since the full text is already present in each video, I don’t feel bad about not personally summarizing what I had already written.
I then asked it to create a consistent naming convention for the titles of each video, and it did a great job with this as well. Reducing “Tents Before Temples - Chapter…” to “TBT Ch.” left each title room for a single phrase hook. Most were pretty basic and unobjectionable, but I changed a few to reflect what I thought was most important. Whether or not this boosts my SEO remains to be seen, but I do appreciate the consistent look within the playlist. The thumbnails are my own creation; I am solely responsible if people don’t like them.
But while I was messing around, I also gave my “apprentice” some tests and busy work. It gave me a tweetable quote for each chapter. Some were decent, others not so much, but it does show me the exact place within the text where each quote is found—no hallucinations.
Next, I had it write a list of biographical facts about me, based on the text. This was something I had previously attempted with Grok and ChatGPT, and neither was able to keep the full text of the book in its memory well enough to get the task done. Once again, each statement was coupled with a direct source from the text, and it was quite thorough. I didn’t attempt to have it psychoanalyze me.
I then had it produce a bibliography of all the works cited. Some of them aren’t really appropriate, since they are sources mentioned within quotes, but erring on the side of thoroughness is nothing to criticize an apprentice for. It mentioned the Bible as “referenced widely,” so I then had it give me a complete list of biblical references, which it did in chronological order, with context from the book. It was A LOT, and I’m glad to say, very evenly distributed throughout the canon of scripture, but heavy on the Torah and Gospels, the two clearest “entry points” into scripture, which was intentional.
As I wrap up this project with my first book, I’ll give my official endorsement to NotebookLM as the best way for an inquisitive but busy person to interact with the text. It’s an experience that’s still enhanced by having the separate chapter PDFs, but even in its raw, 800-page form, anyone can begin asking questions about particular ideas or sources and receive accurate answers.
In the future, I intend to use it to find points of similarity and difference with other thinkers. At least one party, Malcolm & Simone Collins, will no doubt appreciate the outcomes, since they have also made their entire corpus of writing available for interaction with various LLMs. Others might be more turned off, but it could at least help ME find the best inroads to initiate conversations.
The complete playlist of chapters can be found HERE, with new ones added daily.
As a final note, within the description of the final video to be released, the Epilogue of Tents Before Temples, I included the link to the full PDF. I’ve been more than willing to share it with anyone who inquires, but perhaps this might lead to some unknown person stumbling across my work. I leave it in the hands of God, who is more powerful than any algorithm, even as I try to use the algorithms for his glory.
