Thoughts on Tools, AI and the Realities of Using LangChain

The use of tools, both in the realms of science and philosophy, is often considered a defining characteristic that separates humans (and a select few non-human species) from other life forms.
From a scientific perspective, tool use signifies advanced cognitive abilities, including problem-solving and planning, which are generally associated with higher forms of life such as primates, birds, and cetaceans. These species demonstrate an understanding of cause-and-effect relationships, a prerequisite for tool use.
For philosophers, tool use is seen as an embodiment of our capacity to manipulate our environment and shape our destiny, a testament to our unique consciousness and self-awareness. It is a manifestation of our ability to conceptualize, innovate, and transcend physical and biological limitations, and it is this capacity that distinguishes us from other species.
The discovery of tool use in some non-human species challenges the notion of human exceptionalism, prompting a reevaluation of our understanding of intelligence and consciousness in the animal kingdom. The game gets wilder when those same communities must consider the implications of tools developed and used by AIs.
I’m not sure how I missed Toolformer: Language Models Can Teach Themselves to Use Tools in February/March, but it’s as good as this kind of research gets. It’s not a ‘how to build your own LLM in a weekend’ tutorial, but rather a serious work that demonstrates the advantages of tools in the next wave of offerings. It is fascinating to think about the adoption of tools as a determinant of LLM advancement. It’s also apparent to me that a good deal of the tool-building being taken on (and likely over-hyped) by the LangChain community has been using this paper as a ‘north star’.
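The paper’s core trick is easy to illustrate even if its self-supervised training recipe is not: the model learns to emit inline API calls within its own generations, and a thin runtime executes each call and splices the result back into the text. Here is a rough sketch of that execution step in plain Python; the bracket syntax and the single Calculator tool are my own stand-ins, not the paper’s exact format.

```python
import re

# Toy tool registry. Toolformer's tools include a calculator, a QA system,
# search, translation, and a calendar; a single restricted eval stands in here.
TOOLS = {
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only, not safe for real input
}

# Matches inline calls of the form [ToolName(arguments)].
CALL_PATTERN = re.compile(r"\[(\w+)\((.*?)\)\]")

def execute_tool_calls(text: str) -> str:
    """Replace each [Tool(args)] marker in model output with the tool's result."""
    def run(match: re.Match) -> str:
        name, args = match.group(1), match.group(2)
        tool = TOOLS.get(name)
        return tool(args) if tool else match.group(0)  # leave unknown calls untouched
    return CALL_PATTERN.sub(run, text)

print(execute_tool_calls("400 of 1400 participants is a pass rate of [Calculator(400 / 1400)]."))
```

The interesting part of the paper is how the model learns where such calls actually pay off; the plumbing above is the boring part that frameworks like LangChain wrap in layers of abstraction.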
And, after spending hours in my own (mostly unsuccessful) attempts to use tools like LangFlow and Flowise, all built on the ‘foundational’ tool LangChain, I had to wonder whether my abilities and skill set left me in the category of ‘beings who aspire to use tools, but can’t quite pull it off.’
I ran into this post, The Problem With LangChain, which I’ll admit is pretty harsh in its treatment of LangChain’s authors and the ecosystem that has quickly formed around it.
LangChain was by far the most popular tool of choice for RAG, so I figured it was the perfect time to learn it. I spent some time reading LangChain’s rather comprehensive documentation to get a better understanding of how to best utilize it: after a week of research, I got nowhere.
…
Eventually I had an existential crisis: am I a worthless machine learning engineer for not being able to figure LangChain out when very many other ML engineers can? We went back to a lower-level ReAct flow, which immediately outperformed my LangChain implementation in conversation quality and accuracy.
In all, I wasted a month learning and testing LangChain, with the big takeaway that popular AI apps may not necessarily be worth the hype. …
Max Woolf, the author, goes on in detail that will resonate only with those of us who’ve gone through the process of trying to design and prototype intricate LLM-based apps using LangChain and related tech. The examples are instructive, and I’ve now gone through three of the five stages of grief (denial, anger, bargaining, depression, and acceptance) and am writing this post having reached the ‘depression’ stage. The good news is that I’ve learned a lot, and am feeling better about generating my own collections of code blocks that use old-school Python or JavaScript without the simplifications and time savings promised by some of the super-tools (a minimal sketch of what I mean follows). I’m hoping soon to pull even with cephalopods on the “tool users” leaderboard.
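For what it’s worth, the ‘lower-level ReAct flow’ Woolf mentions doesn’t need a framework at all. The sketch below is my own minimal version, not his: it assumes the pre-1.0 openai-python chat interface, a single canned lookup tool, and a crude ACTION/FINAL convention in place of a real prompt template.

```python
import openai  # assumes the pre-1.0 openai-python interface (openai.ChatCompletion)

# Stand-in tool; a real app would call a search index, database, etc.
def lookup(query: str) -> str:
    return "LangChain was first released in October 2022."  # canned result for the demo

TOOLS = {"lookup": lookup}

SYSTEM = (
    "Answer the user's question. If you need information, reply with exactly one line "
    "of the form ACTION: lookup(<query>). When you can answer, reply with FINAL: <answer>."
)

def react_loop(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo", messages=messages
        )["choices"][0]["message"]["content"].strip()
        messages.append({"role": "assistant", "content": reply})
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        if reply.startswith("ACTION:"):
            # Parse "lookup(<query>)" into a tool name and its argument.
            name, _, arg = reply[len("ACTION:"):].strip().partition("(")
            result = TOOLS.get(name.strip(), lambda q: "unknown tool")(arg.rstrip(")"))
            messages.append({"role": "user", "content": f"OBSERVATION: {result}"})
    return "No answer within the step budget."

print(react_loop("When was LangChain first released?"))
```

Nothing about this is clever, which is rather the point: the loop, the parsing, and the prompt are all visible and debuggable, with no chain abstractions in between.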